I am simulating 1D heat conduction using Brownian motion in Python. The question is how to track whether a particle passes the interface of the left or right cell. I need to count these crossings somehow; could you please propose a solution, or should I rethink the model concept?
Short description: the medium consists of cells, and each cell holds its own number of particles. The particles move from one cell to another. The first and last cells hold a constant number of particles (in this case 500 and 0). The result is the temperature profile along x. If we know the number of particles that pass a cell interface (from the left or the right), we can find the heat flux.
Edit: I've added counting of particles passing through an interface (from the right or the left). But in theory the value of the heat flux should be constant, so I guess there is a problem with my code (see the marked excerpt). Could you please review it? Am I counting correctly?
Edited code:
import numpy as np
import matplotlib.pyplot as plt

def Cell_dist(a, dx):
    res = [[] for i in range(N)]
    for i in range(N):
        for value in a:
            if dx*i < value < dx*i+dx:
                res[i].append(value)
    return res

L = 0.2           # length of the medium
N = 10            # number of cells
dx = L/N          # cell dimension
dt = 1            # time step
dur = 60          # duration
M = 20            # number of particles
ro = 8930         # density
k = 391           # thermal conductivity
C_p = 380         # specific heat capacity
T_0 = 100         # maintained temperature
T_r = 0           # reference temperature
DT = T_0 - T_r    # characteristic temperature
a = k / ro / C_p  # thermal diffusivity of copper
dh_r = ro * C_p * dx * DT / M  # reference elementary enthalpy
c = (2*a*dt)**(1/2)            # diffusion length

pos = [[] for i in range(N)]   # creating cells

for time_step in range(dur):
    M_plus = [[] for i in range(N-1)]
    M_minus = [[] for i in range(N-1)]
    flux = [0] * (N-1)
    unirnd = np.random.uniform(0, dx, M).tolist()  # uniform random distribution
    pos[0] = unirnd   # 1st cell BC
    pos[-1] = []      # last cell BC at T_r = 0
    curr_pos = sorted([x for sublist in pos for x in sublist])  # flatten list of particles (concatenation)
    M_t = len(curr_pos)  # number of particles at this time step
    pos_tmp = []
    # Move each particle
    for i in range(M_t):
        normal_distr = np.random.default_rng().normal(0, 1)
        displacement = c*normal_distr
        final_pos = curr_pos[i] + displacement
        pos_tmp.append(final_pos)
        # HERE is the question________________________
        if normal_distr > 0:
            for i in range(1, len(M_plus)-1):
                if i*dx < final_pos < (i+1)*dx:
                    M_plus[i-1].append(1)
        else:
            for i in range(len(M_minus)):
                if i*dx < final_pos < (i+1)*dx:
                    M_minus[i].append(1)
        # END of the question________________________
    for i in range(N-1):
        flux[i] = (len(M_plus[i])-len(M_minus[i]))*dh_r/dt/1000
    pos_new = Cell_dist(pos_tmp, dx)
    pos_new[0] = unirnd
    pos = pos_new
    walker_number_in_cell = []
    for i in range(N):
        walker_number_in_cell.append(len(pos[i]))
    T_n = []
    for num in walker_number_in_cell:
        T_n.append(T_r + num*dh_r/ro/C_p/dx)

# __________Plotting FLUX profile__________
x_a = [0]*(N-1)
for i in range(0, N-1):
    x_a[i] = "{}".format(i+1)+"_{}".format(i+2)
plt.plot(x_a, flux[0:N-1], '-')
plt.xlabel('Interface')
plt.ylabel('Flux')
plt.show()
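For reference, crossing counts are often obtained by comparing each particle's cell index before and after the move, rather than by testing the sign of the random step. The helper below is only an illustrative sketch of that idea, not part of the code above:

# Illustrative helper (not from the original code): count crossings of
# interface i, located at x = (i+1)*dx, by comparing cell indices.
def count_crossings(old_positions, new_positions, dx, N):
    M_plus = [0] * (N - 1)   # left-to-right crossings per interface
    M_minus = [0] * (N - 1)  # right-to-left crossings per interface
    for x_old, x_new in zip(old_positions, new_positions):
        c_old = int(x_old // dx)   # cell index before the move
        c_new = int(x_new // dx)   # cell index after the move
        if c_new > c_old:          # moved right, crossed interfaces c_old .. c_new-1
            for iface in range(c_old, min(c_new, N - 1)):
                M_plus[iface] += 1
        elif c_new < c_old:        # moved left, crossed interfaces c_new .. c_old-1
            for iface in range(max(c_new, 0), c_old):
                M_minus[iface] += 1
    return M_plus, M_minus

Comparing cell indices also catches a particle that jumps over more than one interface in a single step, which a test based only on the final position and the sign of the displacement can miss.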
Related
I want my code to loop so that every time it performs the calculation, it does a cumulative sum for my variable delta_omega, i.e. for every calculation it takes the previous values in the delta_omega array, adds them together, and uses that value to perform the calculation again, and so on. I'm really not sure how to go about this, as I want to plot these results too.
import numpy as np
import matplotlib.pyplot as plt

delta_omega = np.linspace(-900*10**6, -100*10**6, m)  # Hz - range of frequencies
i = 0
while i < len(delta_omega):
    delta = delta_omega[i] - (k*v_cap) + (mu_eff*B)/hbar
    p_ee = (s0*L/2) / (1 + s0 + (2*delta/L)**2)  # population of the excited state
    R = L * p_ee                                 # scattering rate
    F = hbar*k*(R)                               # scattering force on atoms
    a = F/m_Rb                                   # acceleration assumed constant
    vf_slower = (v_cap**2 - (2*a*z0))**0.5       # velocity at the end of the slower
    t_d = 1/a * (v_cap - vf_slower)              # time taken during slower
    # -------- After slower --------
    da = 0.1                                     # (m) distance from end of slower to the middle of the MOT
    vf_MOT = (vf_slower**2 - (2*a*da))**0.5      # (m/s) velocity of the particles at MOT center
    t_a = da/vf_MOT                              # (s) time taken after slower
    r0 = 0.01                                    # MOT capture radius
    vr_max = r0/(t_b+t_d+t_a)                    # maximum transversal velocity
    vz_max = (v_cap**2 + 2*a_max*z0)**0.5        # (m/s) maximum axial velocity
    # -------- Flux of atoms captured --------
    P = 10**(4.312-(4040/T))                     # vapour pressure for liquid phase (use 4.857 for solid phase)
    A = 5*10**-4                                 # area of the oven aperture
    n = P/(k_b*T)                                # atomic number density
    f_oven = ((n*A)/4) * (2/(np.pi)**0.5) * ((2*k_b*T)/m_Rb)**0.5
    f = f_oven * (1 - np.exp(-vr_max**2/vp**2))*(1 - np.exp(-vz_max**2/vp**2))
    i += 1

plt.plot(delta_omega, f)
A simple cumulative sum can be done by defining the variable outside the loop and adding to it:
i = 0
x = 0
while i < 10:
    x = x + 5  # do your work on the cumulative value here
    i += 1
print("cumulative sum: {}".format(x))
So define a variable that will contain the cumulative sum and, on every loop iteration, add to it.
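For plotting, the same pattern applied to your loop would be to keep the running total outside the loop and store one value per iteration. This is only a toy sketch: the line computing f stands in for the real calculation from the question, since the physical constants are not defined here.

import numpy as np
import matplotlib.pyplot as plt

delta_omega = np.linspace(-900e6, -100e6, 200)  # Hz (200 points as an example)

f_values = []          # one stored value per iteration, so it can be plotted
running_total = 0.0    # the cumulative value, defined outside the loop
for dw in delta_omega:
    f = abs(dw) * 1e-9        # stand-in for the real flux calculation
    running_total += f        # add to the cumulative sum each iteration
    f_values.append(running_total)

plt.plot(delta_omega, f_values)
plt.xlabel('detuning (Hz)')
plt.ylabel('cumulative f')
plt.show()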
I'm using the k-prototypes library (kmodes) for mixed numerical and nominal data types. Following https://github.com/nicodv/kmodes/issues/46, to calculate the silhouette score for k-prototypes I compute the silhouette of the categorical attributes (based on Hamming distance) and the silhouette of the numerical attributes (based on Euclidean distance). However, the code below is pretty slow: it takes about 10 hours to calculate the silhouette for 60,000 records. My laptop has 12 GB of RAM and a Core i7.
Any help to improve the speed of the code, please?
import numpy as np
import pandas as pd
from kmodes.kprototypes import KPrototypes

# -------- import data
df = pd.read_csv(r'C:\Users\data.csv')

# ------------- Normalize the data ---------------
# print(df.columns)  # To get column names
x_df = df[['R', 'F']]
x_df_norm = x_df.apply(lambda x: (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0)))
x_df_norm['COType'] = df[['COType']]

def calc_euclian_dis(_s1, _s2):
    # s1 = np.array((3, 5))
    _eucl_dist = np.linalg.norm(_s2 - _s1)  # Euclidean distance; accepts an array input such as [2 6]
    return _eucl_dist

def calc_simpleMatching_dis(_s1, _s2):
    _cat_dist = 0
    if (_s1 != _s2):
        _cat_dist = 1
    return _cat_dist

k = 3
# Calculate the silhouette for one cluster number
kproto = KPrototypes(n_clusters=k, init='Cao', verbose=2)
clusters_label = kproto.fit_predict(x_df_norm, categorical=[2])
_identical_cluster_labels = list(dict.fromkeys(clusters_label))

# Assign cluster labels to the dataset
x_df_norm['Cluster_label'] = clusters_label

# ------------- Calculate the silhouette index -------------
# 1. Calculate ai
_silhouette_Index_arr = []
for i in x_df_norm.itertuples():
    _ai_cluster_label = i[-1]
    # Get samples of the same cluster
    _samples_cluster = x_df_norm[x_df_norm['Cluster_label'] == _ai_cluster_label]
    _dist_array_ai = []
    _s1_nume_att = np.array((i[1], i[2]))
    _s1_cat_att = i[3]
    for j in _samples_cluster.itertuples():
        _s2_nume_att = np.array((j[1], j[2]))
        _s2_cat_att = j[3]
        _euclian_dis = calc_euclian_dis(_s1_nume_att, _s2_nume_att)
        _cat_dis = calc_simpleMatching_dis(_s1_cat_att, _s2_cat_att)
        _dist_array_ai.append(_euclian_dis + (kproto.gamma * _cat_dis))
    ai = np.average(_dist_array_ai)

    # 2. Calculate bi
    # 2.1. Determine the samples of the other clusters
    _identical_cluster_labels.remove(_ai_cluster_label)
    _dic_cluseter = {}
    _bi_arr = []
    for ii in _identical_cluster_labels:
        _samples = x_df_norm[x_df_norm['Cluster_label'] == ii]
        # 2.2. Calculate bi
        _dist_array_bi = []
        for j in _samples.itertuples():
            _s2_nume_att = np.array((j[1], j[2]))
            _s2_cat_att = j[3]
            _euclian_dis = calc_euclian_dis(_s1_nume_att, _s2_nume_att)
            _cat_dis = calc_simpleMatching_dis(_s1_cat_att, _s2_cat_att)
            _dist_array_bi.append(_euclian_dis + (kproto.gamma * _cat_dis))
        _bi_arr.append(np.average(_dist_array_bi))
    _identical_cluster_labels.append(_ai_cluster_label)

    # The minimum bi is taken as the final bi
    bi = min(_bi_arr)

    # 3. Calculate the silhouette index
    if ai == bi:
        _silhouette_i = 0
    elif ai < bi:
        _silhouette_i = 1 - (ai / bi)
    elif ai > bi:
        _silhouette_i = 1 - (bi / ai)
    _silhouette_Index_arr.append(_silhouette_i)

silhouette_score = np.average(_silhouette_Index_arr)
print('_silhouette_Index = ' + str(silhouette_score))
Hi! I reimplemented your function using linear algebra operations to compute the dissimilarities instead of a lot of for loops.
It is way faster :-)
def euclidean_dissim(a, b, **_):
    """Euclidean distance dissimilarity function.
    b is the single point, a is the matrix of vectors."""
    if np.isnan(a).any() or np.isnan(b).any():
        raise ValueError("Missing values detected in numerical columns.")
    return np.linalg.norm(a - b, axis=1)

def matching_dissim(a, b, **_):
    """Simple matching dissimilarity function.
    b is the single point, a is the matrix of all other vectors;
    count how many values match (difference = 0)."""
    # Subtract from the dimension because this is a dissimilarity, not a similarity
    dimension = len(b)
    return dimension - np.sum((b - a) == 0, axis=1)

def calc_silhouette_proto(dataset, numerical_pos, cat_pos, kproto_model):
    '''------------- Calculate the silhouette index -------------'''
    # 1. Compute a(i)
    silhouette_Index_arr = []
    for i in dataset.itertuples():
        # Convert the tuple to an np array
        i = np.array(i)
        unique_cluster_labels = list(np.unique(dataset['cluster_labels']))
        # Each time, remove the considered tuple from the dataset, since we don't compute distances from itself
        data = dataset.copy()
        ai_cluster = i[-1]  # The cluster label is in the last position of the tuple
        # Remove the tuple from the dataset
        tuple_index = dataset.index.isin([i[0]])
        data = data[~tuple_index]
        # Get samples of the same cluster
        samples_of_cluster = data[data['cluster_labels'] == ai_cluster].loc[:, data.columns != 'cluster_labels'].to_numpy()
        # Compute the two distances between the single point and all the others
        euclidian_distances = euclidean_dissim(samples_of_cluster[:, numerical_pos], i[np.array(numerical_pos) + 1])
        categ_distances = matching_dissim(samples_of_cluster[:, cat_pos], i[np.array(cat_pos) + 1])
        # Weighted average of the two distances
        ai = np.average(euclidian_distances) + (kproto_model.gamma * np.average(categ_distances))

        # 2. Calculate bi
        unique_cluster_labels.remove(ai_cluster)
        bi_arr = []
        for ii in unique_cluster_labels:
            # Get all the samples of cluster ii
            samples = data[data['cluster_labels'] == ii].loc[:, data.columns != 'cluster_labels'].to_numpy()
            # Compute the two distances between the single point and all the others
            euclidian_distances = np.linalg.norm(samples[:, numerical_pos] - i[np.array(numerical_pos) + 1], axis=1)
            categ_distances = matching_dissim(samples[:, cat_pos], i[np.array(cat_pos) + 1])
            distance_bi = np.average(euclidian_distances) + (kproto_model.gamma * np.average(categ_distances))
            bi_arr.append(np.average(distance_bi))

        # The minimum bi is taken as the final bi
        if len(bi_arr) == 0:
            bi = 0
        else:
            bi = min(bi_arr)

        # 3. Calculate the silhouette index
        if ai == bi:
            silhouette_i = 0
        elif ai < bi:
            silhouette_i = 1 - (ai / bi)
        elif ai > bi:
            silhouette_i = 1 - (bi / ai)
        silhouette_Index_arr.append(silhouette_i)

    silhouette_score = np.average(silhouette_Index_arr)
    return silhouette_score
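A hypothetical call might look like the following. The column names and positions are assumptions based on the normalized dataframe from the question, and the categorical column is assumed to be numerically encoded (as the function's array arithmetic requires); they are not fixed by the function itself.

# Assumed usage: numerical attributes in columns 0-1 ('R', 'F'), the categorical
# attribute (numerically encoded) in column 2 ('COType'), and the fitted model `kproto`.
data_for_silhouette = x_df_norm[['R', 'F', 'COType']].copy()
data_for_silhouette['cluster_labels'] = clusters_label  # column name the function expects
score = calc_silhouette_proto(dataset=data_for_silhouette,
                              numerical_pos=[0, 1],
                              cat_pos=[2],
                              kproto_model=kproto)
print('silhouette score =', score)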
I'm trying to calculate the mean value of a quantity (in the form of a 2D array) as a function of its distance from the center of a 2D grid. I understand that the idea is to identify all the array elements that are at a distance R from the center, then add them up and divide by the number of elements. However, I'm having trouble actually identifying an algorithm for doing this.
I have attached a working example of the code that generates the 2D array below. The code calculates some quantities resulting from gravitational lensing, so the way the array is made is irrelevant to this problem, but I have attached the whole code so that you can create the output array for testing.
import numpy as np
import multiprocessing
import matplotlib.pyplot as plt

n = 100  # grid size
c = 3e8
G = 6.67e-11
M_sun = 1.989e30
pc = 3.086e16  # parsec
Dds = 625e6*pc
Ds = 1726e6*pc   # z=2
Dd = 1651e6*pc   # z=1
FOV_arcsec = 0.0001
FOV_arcmin = FOV_arcsec/60.
pix2rad = ((FOV_arcmin/60.)/float(n))*np.pi/180.
rad2pix = 1./pix2rad
Renorm = (4*G*M_sun/c**2)*(Dds/(Dd*Ds))
#stretch = [10, 2]

# To create a random distribution of points
def randdist(PDF, x, n):
    # Create a distribution following PDF(x). PDF and x
    # must be of the same length. n is the number of samples.
    fp = np.random.rand(n,)
    CDF = np.cumsum(PDF)
    return np.interp(fp, CDF, x)

def get_alpha(args):
    zeta_list_part, M_list_part, X, Y = args
    alpha_x = 0
    alpha_y = 0
    for key in range(len(M_list_part)):
        z_m_z_x = (X - zeta_list_part[key][0])*pix2rad
        z_m_z_y = (Y - zeta_list_part[key][1])*pix2rad
        alpha_x += M_list_part[key] * z_m_z_x / (z_m_z_x**2 + z_m_z_y**2)
        alpha_y += M_list_part[key] * z_m_z_y / (z_m_z_x**2 + z_m_z_y**2)
    return (alpha_x, alpha_y)

if __name__ == '__main__':
    # number of processes, scale accordingly
    num_processes = 1  # Number of CPUs to be used
    pool = multiprocessing.Pool(processes=num_processes)
    num = 100  # The number of points/microlenses
    r = np.linspace(-n, n, n)
    PDF = np.abs(1/r)
    PDF = PDF/np.sum(PDF)  # PDF should be normalized
    R = randdist(PDF, r, num)
    Theta = 2*np.pi*np.random.rand(num,)
    x1 = [R[k]*np.cos(Theta[k])*1 for k in range(num)]
    y1 = [R[k]*np.sin(Theta[k])*1 for k in range(num)]
    # Uniform distribution
    #R = np.random.uniform(-n,n,num)
    #x1 = np.random.uniform(-n,n,num)
    #y1 = np.random.uniform(-n,n,num)
    zeta_list = np.column_stack((np.array(x1), np.array(y1)))  # List of coordinates for the microlenses
    x = np.linspace(-n, n, n)
    y = np.linspace(-n, n, n)
    X, Y = np.meshgrid(x, y)
    M_list = np.array([0.1 for i in range(num)])
    # split zeta_list, M_list, X, and Y
    zeta_list_split = np.array_split(zeta_list, num_processes, axis=0)
    M_list_split = np.array_split(M_list, num_processes)
    X_list = [X for e in range(num_processes)]
    Y_list = [Y for e in range(num_processes)]
    alpha_list = pool.map(
        get_alpha, zip(zeta_list_split, M_list_split, X_list, Y_list))
    alpha_x = 0
    alpha_y = 0
    for e in alpha_list:
        alpha_x += e[0]
        alpha_y += e[1]
    alpha_x_y = 0
    alpha_x_x = 0
    alpha_y_y = 0
    alpha_y_x = 0
    alpha_x_y, alpha_x_x = np.gradient(alpha_x*rad2pix*Renorm, edge_order=2)
    alpha_y_y, alpha_y_x = np.gradient(alpha_y*rad2pix*Renorm, edge_order=2)
    det_A = 1 - alpha_y_y - alpha_x_x + (alpha_x_x)*(alpha_y_y) - (alpha_x_y)*(alpha_y_x)
    abs = np.absolute(det_A)
    I = abs**(-1.)
    O = np.log10(I+1)
    plt.contourf(X, Y, O, 100)
The array of interest is O, and I have attached a plot of how it should look. It can differ based on the random distribution of points.
What I'm trying to do is plot the mean values of O as a function of radius from the center of the grid. In the end, I want to be able to plot the average O as a function of distance from the center in a 2D line graph. So I suppose the first step is to define circles of radius R, based on X and Y.
def circle(x, y):
    r = np.sqrt(x**2 + y**2)
    return r
Now I just have to figure out a way to find all the values of O that have the same indices as the corresponding values of R. I'm a bit confused on this part and would appreciate any help.
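One concrete way to carry out that selection, as a sketch only (it reuses X, Y, O, and circle from the code above; the radius and ring width are arbitrary example values):

# Radius of every grid point from the center (in grid coordinates, not pixel indices)
r_grid = circle(X, Y)

# Mean of O over a thin ring [R, R + dr) around radius R
R, dr = 20.0, 2.0
ring_mask = (r_grid >= R) & (r_grid < R + dr)
mean_O_at_R = O[ring_mask].mean()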
You can find the geometric coordinates of a circle with center (0, 0) and radius R like this:
phi = np.linspace(0, 1, 50)
x = R*np.cos(2*np.pi*phi)
y = R*np.sin(2*np.pi*phi)
These values, however, will not fall exactly on the regular pixel grid but in between.
In order to use them as sampling points you can either round the values and use them as indices, or interpolate the values from the neighboring pixels.
Attention: the pixel indices and the x, y coordinates are not the same. In your example, (0, 0) is at the pixel location (50, 50).
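Following the rounding suggestion, a minimal sketch of the sampling loop might look like this; it assumes the n x n array O from the question, with the grid center at pixel (n//2, n//2):

n_pix = O.shape[0]                  # grid size (100 in the question)
cx = cy = n_pix // 2                # pixel location of the center
phi = np.linspace(0, 1, 200, endpoint=False)

radii = np.arange(1, n_pix // 2)    # radii in pixels
mean_O = []
for R_pix in radii:
    # round the circle coordinates to the nearest pixel indices
    ix = np.round(cx + R_pix*np.cos(2*np.pi*phi)).astype(int)
    iy = np.round(cy + R_pix*np.sin(2*np.pi*phi)).astype(int)
    mean_O.append(O[iy, ix].mean())

plt.figure()
plt.plot(radii, mean_O)
plt.xlabel('Radius (pixels)')
plt.ylabel('Mean O')
plt.show()

On small circles the rounded coordinates repeat some pixels, which slightly weights them; interpolating from the neighboring pixels, as mentioned above, avoids that if more accuracy is needed.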
I am trying to analyse a wave on a string by solving the wave equation with Python. Here are my requirements for the solution:
1) I model reflective ends by using much larger masses on the first and last points of the string -> large inertia.
2) No spring on the edges, so k[0] and k[-1] will be ZERO.
I have a problem with indices. In my second loop I use y[i,j-1], k[i-1], and y[i-1,j]. In the first iteration the loop will then use y[0,-1], k[-1], and y[-1,0]. These are my last points, not my first points. How can I avoid this problem?
What you need is to initialize your mass array with one additional element, i.e.
...
m = [mi]*N # mass array [!!!] instead of (N-1) [!!!]
...
Idem for your springs
...
k = [ki]*N
...
Consequently, you can keep k[0] equal to 10. since you model reflective ends. You may thus want to comment out or drop this line:
...
##k[0] = 0
...
For aesthetic reasons you may want to fill the gap at the end of the x-axis. In this case, simply set
N = 201 # Number of mass points
Your code thus becomes
from numpy import *
from matplotlib.pyplot import *

# Variables
N = 201        # Number of mass points
nT = 1200      # Number of time points
mi = 0.02      # mass in kg
m = [mi]*N     # mass array
m[-1] = 100    # Large last mass (reflective edge)
m[0] = 100     # Large first mass (reflective edge)
ki = 10.       # spring constant
k = [ki]*N
k[-1] = 0
dx = 0.2
kappa = ki*dx
my = mi/dx
c = sqrt(kappa/my)  # velocity
dt = 0.04

# 3 vectors
x = arange(N)*dx     # x points
t = arange(N)*dt     # t points
y = zeros([N, nT])   # 2D array

# Loop over initial condition
for i in range(N-1):
    y[i,0] = sin((7.*pi*i)/(N-1))  # Initial condition dependent on mass point

# Iterating over time and position to find the next position of the wave
for j in range(nT-1):
    for i in range(N-1):
        y[i,j+1] = 2*y[i,j] - y[i,j-1] + (dt**2/m[i])*(k[i-1]*y[i+1,j] - 2*k[i-1]*y[i,j] + k[i]*y[i-1,j])
    # check values of the edges
    print(y[:2,j+1], y[-2:,j+1])
    # Creates an animation
    cla()
    ylabel("Amplitude")
    xlabel("x")
    ylim(-10,10)
    plot(x, y[:,j-2])
    pause(0.001)
close()
which produces
Following your comment, I think that if you want to see the wave traveling along the string before reflection, you should not initialize the condition everywhere in space. I mean, e.g., doing
...
# Loop over initial condition
for i in range(N-1):
    ci_i = sin(7.*pi*i/(N-1))  # Initial condition dependent on mass point
    if sign(ci_i*y[i-1,0]) < 0:
        break
    else:
        y[i,0] = ci_i
...
produces
New attempt after answers:
from numpy import *
from matplotlib.pyplot import *

N = 201
nT = 1200
mi = 0.02
m = [mi]*(N)
m[-1] = 1000
m[0] = 1000
ki = 10.
k = [ki]*N
dx = 0.2
kappa = ki*dx
my = mi/dx
c = sqrt(kappa/my)
dt = 0.04
x = arange(N)*dx
t = arange(N)*dt
y = zeros([N, nT])

for i in range(N-1):
    y[i,0] = sin((7.*pi*i)/(N-1))

for j in range(nT-1):
    for i in range(N-1):
        if j == 0:  # if j = 0 then ... y[i,j-1] = y[i,j]
            y[i,j+1] = 2*y[i,j] - y[i,j] + (dt**2/m[i])*(k[i-1]*y[i+1,j] - 2*k[i-1]*y[i,j] + k[i]*y[i-1,j])
        else:
            y[i,j+1] = 2*y[i,j] - y[i,j-1] + (dt**2/m[i])*(k[i-1]*y[i+1,j] - 2*k[i-1]*y[i,j] + k[i]*y[i-1,j])
    cla()
    ylim(-1,1)
    plot(x, y[:,j-2])
    pause(0.0001)
    ylabel("Amplitude")
    xlabel("x")

print(len(x), len(t), N, nT)
Here is a plot of the new attempt at a solution, with the |amplitude| of the antinode equal to 1.0. Does this get me any further with solving the issue with the indices?
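As a side note, another common way to sidestep the negative-index wrap-around (not what the answer above does, just a sketch using the same variable names and a uniform spring constant k[i]) is to update only the interior points, so that i-1 and i+1 never fall outside the string:

# Only interior mass points are updated; y[0, :] and y[N-1, :] keep their
# initial values, which also acts as a fixed (reflective) boundary condition,
# so the large end masses are no longer what does the reflecting.
for j in range(nT-1):
    for i in range(1, N-1):
        y_prev = y[i, j] if j == 0 else y[i, j-1]   # zero initial velocity at the first step
        y[i, j+1] = (2*y[i, j] - y_prev
                     + (dt**2/m[i])*(k[i]*y[i+1, j] - 2*k[i]*y[i, j] + k[i]*y[i-1, j]))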
I have drawn one position (x, y, z) for each of N particles in an enclosed volume.
x[i] = random.uniform(a,b) ...
I also found the constant velocity (vx, vy, vz) of the N particles.
vx[i] = random.gauss(mean,sigma) ...
Now I want to find the positions of the N (= 100) particles over time. I used the Euler-Cromer method for this.
delta_t = linspace(0,2,n-1)
n = 1000
v[0] = vx; ...
r[0] = x; ...
for i in range(n-1):
    v[i+1,:] = v[i,:]
    r[i+1,:] = r[i,:] + delta_t*v[i+1,:]
    t[i+1] = t[i] + delta_t
But I want to find the position over time for every particle. How can I do this? And how do I plot the particles' positions over time in 3D?
To find the position of the particles at a given time you can use the following code:
import numpy as np

# assign random positions in the box 0,0,0 to 1,1,1
x = np.random.random((100,3))
# assign random velocities in the range around 0
v = np.random.normal(size=(100,3))

# define function to project the position in time according to the
# laws of motion: x(t) = x_0 + v_0 * t
def position(x_0, v_0, t):
    return x_0 + v_0*t

# get the new positions at time = 3.2
position(x, v, 3.2)
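To get the full trajectory and a 3D plot, a minimal extension of that idea is to evaluate position() on a grid of times and plot with matplotlib's 3D axes; the time range below is an arbitrary example:

import matplotlib.pyplot as plt

times = np.linspace(0, 2, 50)   # example time grid
# trajectory[k] holds the positions of all 100 particles at times[k]
trajectory = np.array([position(x, v, t) for t in times])   # shape (50, 100, 3)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')   # 3D axes
for p in range(trajectory.shape[1]):    # one line per particle
    ax.plot(trajectory[:, p, 0], trajectory[:, p, 1], trajectory[:, p, 2])
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()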