I've been trying to make a Julia set in Python, but my output turns into NaN at some early step and I don't know what causes it.
Just for the sake of confession: my programming classes are not good, I don't really know what I am doing, and this is mostly what I've learned from Google.
Here's the code:
import matplotlib.pyplot as plt

c = complex(1.5, -0.6)
xli = []
yli = []

while True:
    z = c
    for i in range(1, 101):
        if abs(z) > 2.0:
            break
        z = z*z + c
    if i > 0 and i < 100:
        break

xi = -1.24
xf = 1.4
yi = -2.9
yf = 2.1

# the loop for the julia set
for k in range(1, 51):
    x = xi + k*(xf-xi)/50
    for n in range(51):
        y = yi + n*(yf-yi)/50
        z = z + x + y*1j
        print z
        for i in range(51):
            z = z*z + c    # the error is coming from somewhere around here
            if abs(z) > 2: # not sure if this is correct
                xli.append(x)
                yli.append(y)

plt.plot(xli, yli, 'bo')
plt.show()
print xli
print yli
Thank you in advance :)
Just for the sake of confession: I know nothing about Julia sets or matplotlib.
pyplot seems an odd choice due to its low resolution and the fact that colors can't be specified as a vector alongside X & Y. And had it worked as written, 'bo' would have produced just a grid of blue circles.
Your first while True: loop isn't needed as you've picked what you believe to be a viable c.
Here's my rework of your code:
import matplotlib.pyplot as plt

c = complex(1.5, -0.6)

# image size
img_x = 100
img_y = 100

# drawing area
xi = -1.24
xf = 1.4
yi = -2.9
yf = 2.1

iterations = 8  # maximum iterations allowed (maps to 8 shades of gray)

# the loop for the julia set
results = {}  # pyplot speed optimization to plot all same gray at once
for y in range(img_y):
    zy = y * (yf - yi) / (img_y - 1) + yi
    for x in range(img_x):
        zx = x * (xf - xi) / (img_x - 1) + xi
        z = zx + zy * 1j
        for i in range(iterations):
            if abs(z) > 2:
                break
            z = z * z + c
        if i not in results:
            results[i] = [[], []]
        results[i][0].append(x)
        results[i][1].append(y)

for i, (xli, yli) in results.items():
    gray = 1.0 - i / iterations
    plt.plot(xli, yli, '.', color=(gray, gray, gray))

plt.show()
OUTPUT
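If you later want a higher-resolution rendering, a common alternative (not part of the original answer, just a sketch that reuses the same variables from the rework above) is to store the escape iteration for every pixel in a 2-D array and show it with plt.imshow instead of many plt.plot calls:

import numpy as np

# sketch: same escape-time computation as above, rendered as an image
counts = np.zeros((img_y, img_x))
for y in range(img_y):
    zy = y * (yf - yi) / (img_y - 1) + yi
    for x in range(img_x):
        zx = x * (xf - xi) / (img_x - 1) + xi
        z = zx + zy * 1j
        for i in range(iterations):
            if abs(z) > 2:
                break
            z = z * z + c
        counts[y, x] = i  # iteration at which this point escaped (or the last one tried)

plt.imshow(counts, cmap='gray', origin='lower', extent=(xi, xf, yi, yf))
plt.show()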
Is there a performant way to directly solve for the most likely intersection point (X, Y) of several multivariable Gaussians?
I've seen a few posts here asking how to solve for the intersection of two Gaussians, and that concept is familiar to me. Right now, the only approach that's obvious to me is to iterate, solving for two distributions at a time.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

mus = [np.array([[0.3], [0.7]]),
       np.array([[0.3], [0.2]]),
       np.array([[1.5], [0.6]])]
covs = [np.array([[0.85, 0.3], [0.3, 0.25]]),
        np.array([[0.7, -0.41], [-0.41, 0.25]]),
        np.array([[0.5, 0.15], [0.15, 0.15]])]
cmaps = ["Reds", "Blues", "Greens"]

for m, cov, c in zip(mus, covs, cmaps):
    cov_inv = np.linalg.inv(cov)
    cov_det = np.linalg.det(cov)
    x = np.linspace(-3, 3)
    y = np.linspace(-3, 3)
    X, Y = np.meshgrid(x, y)
    coe = 1.0 / ((2 * np.pi)**2 * cov_det)**0.5
    Z = coe * np.e ** (-0.5 * (cov_inv[0,0]*(X-m[0])**2 +
        (cov_inv[0,1] + cov_inv[1,0])*(X-m[0])*(Y-m[1]) + cov_inv[1,1]*(Y-m[1])**2))
    plt.contour(X, Y, Z, cmap=c)
You can do a LOT better than iterating between 2 solutions at a time. Realize that at every (x, y) point, you have a Z value for all 3 curves, and at the 3-way intersecting point, they are all equal (or within tolerance). And at other points, if you take the lowest Z of the curves, and move towards the center (mu_x, mu_y) of that curve, you are moving in an improving direction.
The below is an iterative algorithm that does that. There is certainly some meat on the bone in terms of possible enhancements. Notably, you could incorporate a "tolerance" for stopping conditions easily, or do some weighted average of the 2 lower z values instead of just the lowest to get the movement vector, or tinker with a larger step size.
Anyhow, this converges very rapidly for many different test starting points.
Code:
import numpy as np
import matplotlib.pyplot as plt

class Curve:
    # a convenience so we can avoid recomputations
    def __init__(self, mu, cov_inv, cov_det):
        self.mu = mu
        self.cov_inv = cov_inv
        self.cov_det = cov_det
        self.coe = 1.0 / ((2 * np.pi)**2 * cov_det)**0.5

    def z(self, x, y):
        Z = self.coe * np.e ** (-0.5 * (self.cov_inv[0,0]*(x-self.mu[0])**2 +
            (self.cov_inv[0,1] + self.cov_inv[1,0])*(x-self.mu[0])*(y-self.mu[1]) +
            self.cov_inv[1,1]*(y-self.mu[1])**2))
        return Z

mus = [np.array([[0.3], [0.7]]),
       np.array([[0.3], [0.2]]),
       np.array([[1.5], [0.6]])]
covs = [np.array([[0.85, 0.3], [0.3, 0.25]]),
        np.array([[0.7, -0.41], [-0.41, 0.25]]),
        np.array([[0.5, 0.15], [0.15, 0.15]])]
cmaps = ["Reds", "Blues", "Greens"]

curves = []
for m, cov, c in zip(mus, covs, cmaps):
    cov_inv = np.linalg.inv(cov)
    cov_det = np.linalg.det(cov)
    x = np.linspace(-3, 3)
    y = np.linspace(-3, 3)
    X, Y = np.meshgrid(x, y)
    coe = 1.0 / ((2 * np.pi)**2 * cov_det)**0.5
    Z = coe * np.e ** (-0.5 * (cov_inv[0,0]*(X-m[0])**2 +
        (cov_inv[0,1] + cov_inv[1,0])*(X-m[0])*(Y-m[1]) + cov_inv[1,1]*(Y-m[1])**2))
    plt.contour(X, Y, Z, cmap=c)
    curves.append(Curve(m, cov_inv, cov_det))

# iterative algorithm...
pos = np.array((-1, 2))
step_size = 0.1
num_steps = 100
footprints = [pos, ]
for step in range(num_steps):
    zs = [(curves[i].z(*pos), i) for i in range(len(curves))]
    zs.sort()             # sort by z value, lowest will be first
    c = curves[zs[0][1]]  # the curve to move toward
    vec = c.mu.T - pos
    move_vec = vec * (step_size / np.linalg.norm(vec))
    print(f'move: {move_vec} towards curve {zs[0][1]}')
    pos = pos + move_vec
    pos = pos.flatten()
    # check to see if we have backtracked, if so, shorten the step
    if len(footprints) > 1 and np.linalg.norm(pos - footprints[-2]) < step_size:
        # print(f'norm: {np.linalg.norm(pos - footprints[-2])}')
        step_size *= 0.5
    footprints.append(pos)

plt.plot([t[0] for t in footprints], [t[1] for t in footprints], c='k', lw=2)
plt.show()
Plot:
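As mentioned above, a stopping tolerance is straightforward to add. A minimal sketch (tol is an assumed value, not from the original answer; this check would sit inside the for step loop, right after zs is sorted):

tol = 1e-6  # assumed tolerance on the spread of the three z values
z_vals = [float(curve.z(*pos)) for curve in curves]
if max(z_vals) - min(z_vals) < tol:
    break  # all three surfaces agree at pos to within tol; treat it as the intersection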
I was trying to plot a Poincare section of the following DE, which has a periodic potential function V(x) = -cos(x); from the code below, the equation is x'' + 0.5 x' + sin(x) = 1.1 sin(0.5 t).
After calculating the solution with RK4 (time step dt = 0.001), Python drew the following plot.
But according to the textbook (2nd edition, by J.M.T. Thompson and H.B. Stewart), the section should look like this:
The two are very different. My feeling is that, since the Poincare section does not look like what the authors draw, there must be some error in my code. However, I have done the same for other forced-oscillation DEs, including Duffing's equation, and obtained plots identical to those in the textbook. So I am wondering whether there is a typo in the equation given by the textbook, or a problem somewhere else. I have posted my code below; it may be a bit messy to follow, so I appreciate you taking the time to work through it.
import numpy as np
import matplotlib.pylab as plt
import matplotlib as mpl
import sys
import time

state = [1]

def print_percent_done(index, total, state, title='Please wait'):
    percent_done2 = (index+1)/total*100
    percent_done = round(percent_done2, 1)
    print(f'\t⏳{title}: {percent_done}% done', end='\r')
    if percent_done2 > 99.9 and state[0]:
        print('\t✅'); state = [0]

####
no = 1
####

def multiple(n, q):
    m = n; i = 0
    while m >= 0:
        m -= q
        i += 1
    return min(abs(n - (i - 1)*q), abs(i*q - n))

# system(2)
# Basic info.
filename = 'sinPotentialWell'
# a = 1
# alpha = 0.01
# w = 4
w0 = .5
n = 1000000
h = .01
t_0 = 0
x_0 = 0.1
y_0 = 0

A = [(t_0, x_0, y_0)]

def f(t, x, y):
    return y

def g(t, x, y):
    return -0.5*y - np.sin(x) + 1.1*np.sin(0.5*t)

for i in range(n):
    t0 = A[i][0]; x0 = A[i][1]; y0 = A[i][2]
    k1 = f(t0, x0, y0)
    u1 = g(t0, x0, y0)
    k2 = f(t0 + h/2, x0 + h*k1/2, y0 + h*u1/2)
    u2 = g(t0 + h/2, x0 + h*k1/2, y0 + h*u1/2)
    k3 = f(t0 + h/2, x0 + h*k2/2, y0 + h*u2/2)
    u3 = g(t0 + h/2, x0 + h*k2/2, y0 + h*u2/2)
    k4 = f(t0 + h, x0 + h*k3, y0 + h*u3)
    u4 = g(t0 + h, x0 + h*k3, y0 + h*u3)
    t = t0 + h
    x = x0 + (k1 + 2*k2 + 2*k3 + k4)*h/6
    y = y0 + (u1 + 2*u2 + 2*u3 + u4)*h/6
    A.append([t, x, y])
    if i % 1000 == 0: print_percent_done(i, n, state, 'Solving given DE')

# phase diagram
print('showing 3d_(x, y, phi) graph')

PHI = [[]]; X = [[]]; Y = [[]]
PHI_period1 = []; X_period1 = []; Y_period1 = []
for i in range(n):
    if w0*A[i][0] % (2*np.pi) < 1 and w0*A[i-1][0] % (2*np.pi) > 6:
        PHI.append([]); X.append([]); Y.append([])
        PHI_period1.append((w0*A[i][0]) % (2*np.pi)); X_period1.append(A[i][1]); Y_period1.append(A[i][2])

phi_period1 = np.array(PHI_period1); x_period1 = np.array(X_period1); y_period1 = np.array(Y_period1)

print('showing Poincare Section at phi=0')
plt.plot(x_period1, y_period1, 'gs', markersize=2)
plt.plot()
plt.title('phi=0 Poincare Section')
plt.xlabel('x'); plt.ylabel('y')
plt.show()
If you factor out some of the computation blocks, you can make the code more flexible and computations more direct. No need to reconstruct something if you can construct it in the first place. You want to catch the points where w0*t is a multiple of 2*pi, so just construct the time loops so you integrate in chunks of 2*pi/w0 and only remember the interesting points.
num_plot_points = 2000
h = .01
t, x, y = t_0, x_0, y_0
x_section, y_section = [], []
T = 2*np.pi/w0

for k in range(num_plot_points):
    t = 0
    while t < T - 1.2*h:
        x, y = RK4step(t, x, y, h)
        t += h
    x, y = RK4step(t, x, y, T - t)
    if k % 100 == 0: print_percent_done(k, num_plot_points, state, 'Solving given DE')
    x_section.append(x); y_section.append(y)
with RK4step just containing the code of the RK4 step.
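For completeness, a minimal sketch of such a helper, reusing f and g from the question; it is just the body of the original integration loop factored out:

def RK4step(t, x, y, h):
    # one classical RK4 step for the system x' = f(t, x, y), y' = g(t, x, y)
    k1 = f(t, x, y);                         u1 = g(t, x, y)
    k2 = f(t + h/2, x + h*k1/2, y + h*u1/2); u2 = g(t + h/2, x + h*k1/2, y + h*u1/2)
    k3 = f(t + h/2, x + h*k2/2, y + h*u2/2); u3 = g(t + h/2, x + h*k2/2, y + h*u2/2)
    k4 = f(t + h, x + h*k3, y + h*u3);       u4 = g(t + h, x + h*k3, y + h*u3)
    return x + (k1 + 2*k2 + 2*k3 + k4)*h/6, y + (u1 + 2*u2 + 2*u3 + u4)*h/6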
This will not solve the mystery. The veil gets lifted if you consider that x is the angle theta (of a forced pendulum with friction) on a circle. Thus, to get points with the same spatial location, it needs to be reduced by multiples of 2*pi. Doing that,
plt.plot([x%(2*np.pi) for x in x_section], y_section, 'gs', markersize = 2)
results in the expected plot
I want to generate random coordinates for spheres in a box geometry. I'm using a while loop and I have two conditions. The first is the distance between coordinates: the standard distance formula is used so that the spheres do not overlap. The second is porosity: when porosity drops below 0.45, generation should stop. My code works correctly, but when I reduce the porosity condition below about 0.80 the algorithm gets stuck; it cannot reach that porosity even after hours. How can I improve it to generate coordinates faster? Any suggestions are appreciated.
# dist = math.sqrt(((x2-x1)**2) + ((y2-y1)**2) + ((z2-z1)**2))
import math
import random
import numpy as np
import matplotlib.pyplot as plt

A = 0.04         # x border.
B = 0.04         # y border.
C = 0.125        # z border.
V_total = A*B*C  # volume
r = 0.006        # min distance of spheres.
radius = 0.003   # radius of spheres.
wall_distance = 0.003
Porosity = 1.0
coordinates = np.array([])

while Porosity >= 0.90:
    # coordinates
    x = random.uniform(wall_distance, A-wall_distance)
    y = random.uniform(wall_distance, B-wall_distance)
    z = random.uniform(wall_distance, C-wall_distance)
    coord1 = (x, y, z)
    if coordinates.shape[0] == 0:  # add first one without condition
        coordinates = np.array([coord1])
    else:
        coordinates = np.vstack((coordinates, coord1))
    # seperate x,y,z and convert list for control
    d_x = coordinates[:,0]
    x = d_x.tolist()
    d_y = coordinates[:,1]
    y = d_y.tolist()
    d_z = coordinates[:,2]
    z = d_z.tolist()
    for j in range(len(y)):
        for k in range(j+1, len(z)):
            dist = math.sqrt(((x[j]-x[k])**2) + ((y[j]-y[k])**2) + ((z[j]-z[k])**2))
            if dist <= r:
                coordinates = coordinates[:-1, :]  # if distance is less than r, remove last coordinate
    # check porosity
    V_spheres = (4/3) * (np.pi) * (radius**3) * len(coordinates)
    V_void = V_total - V_spheres
    Porosity = V_void / V_total

print("Porosity: {}".format(Porosity))
print("Number of spheres: {}".format(len(coordinates)))

fig = plt.figure()
ax = plt.axes(projection='3d')
ax.set_xlim([0, A])
ax.set_ylim([0, B])
ax.set_zlim([0, C])
ax.set_title('Coordinates for spheres')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
p = ax.scatter(coordinates[:,0], coordinates[:,1], coordinates[:,2])
fig.colorbar(p)
plt.show()
There are a number of things you can do to improve your performance here. See my modified code below, with explanations
import math
import random
import numpy as np
import matplotlib.pyplot as plt

A = 0.04         # x border.
B = 0.04         # y border.
C = 0.125        # z border.
V_total = A*B*C  # volume
r = 0.006        # min distance of spheres.
radius = 0.003   # radius of spheres.
wall_distance = 0.003
Porosity = 1.0
coordinates = np.empty((0, 3))  # initialize array with correct shape

while Porosity >= 0.70:
    # coordinates
    x = random.uniform(wall_distance, A-wall_distance)
    y = random.uniform(wall_distance, B-wall_distance)
    z = random.uniform(wall_distance, C-wall_distance)
    is_invalid = (True in [
        math.sqrt(((x - coordinates[i_coor, 0])**2) +
                  ((y - coordinates[i_coor, 1])**2) +
                  ((z - coordinates[i_coor, 2])**2)) <= r
        for i_coor in range(coordinates.shape[0])])
    if not is_invalid:
        coordinates = np.append(coordinates, [[x, y, z]], axis=0)
    else:
        continue
    V_spheres = (4/3) * (np.pi) * (radius**3) * len(coordinates)
    V_void = V_total - V_spheres
    Porosity = V_void / V_total
    print(f"placed coord {len(coordinates)}, por = {Porosity}")

print("Porosity: {}".format(Porosity))
print("Number of spheres: {}".format(len(coordinates)))

fig = plt.figure()
ax = plt.axes(projection='3d')
ax.set_xlim([0, A])
ax.set_ylim([0, B])
ax.set_zlim([0, C])
ax.set_title('Coordinates for spheres')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
p = ax.scatter(coordinates[:,0], coordinates[:,1], coordinates[:,2])
np.savetxt('out.csv', coordinates)
fig.colorbar(p)
plt.show()
The main thing I changed is this double for loop:
for j in range(len(y)):
    for k in range(j+1, len(z)):
        dist = math.sqrt(((x[j]-x[k])**2) + ((y[j]-y[k])**2) + ((z[j]-z[k])**2))
This was checking every pair of points for overlap each time a single point was added, which takes unnecessarily long. By only checking whether the new point overlaps the already-accepted points, the work per insertion drops from O(n^2) to O(n), so the overall runtime goes from O(n^3) to O(n^2). I was able to run this fairly quickly down to 0.5 porosity.
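A further small speedup, not included in the code above, is to vectorize the per-candidate check with numpy instead of a Python-level list comprehension. A sketch, where coordinates is the (n, 3) array built in the loop:

# sketch: vectorized overlap test for one candidate point (x, y, z)
candidate = np.array([x, y, z])
too_close = coordinates.shape[0] > 0 and \
            np.any(np.linalg.norm(coordinates - candidate, axis=1) <= r)
if not too_close:
    coordinates = np.vstack((coordinates, candidate))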
I just started learning Python, so I am new to it. I have written a simple code for 2D heat conduction, but I don't know what the problem with it is. The result is very strange; I think the temperature distribution is not shown correctly. I have searched a lot but unfortunately could not find an answer to my problem. Can anyone help me?
Thank you
# Library
import numpy
from matplotlib import pyplot

# Grid Generation
nx = 200
ny = 200
dx = 2 / (nx-1)
dy = 2 / (ny-1)

# Time Step
nt = 50
alpha = 1
dt = 0.001

# Initial Condition (I.C) and Boundry Condition (B.C)
T = numpy.ones((nx, ny))      # I.C (U = Velocity)
x = numpy.linspace(0, 2, nx)  # B.C
y = numpy.linspace(0, 2, ny)  # B.C
Tn = numpy.empty_like(T)      # initialize a temporary array
X, Y = numpy.meshgrid(x, y)
T[0, :] = 20     # B.C
T[-1, :] = -100  # B.C
T[:, 0] = 150    # B.C
T[:, -1] = 100   # B.C

# Solver
### Run through nt timesteps
for n in range(nt + 1):
    Tn = T.copy()
    T[1:-1, 1:-1] = (Tn[1:-1, 1:-1] +
                     ((alpha * dt / dx**2) *
                      (Tn[1:-1, 2:] - 2 * Tn[1:-1, 1:-1] + Tn[1:-1, 0:-2])) +
                     ((alpha * dt / dy**2) *
                      (Tn[2:, 1:-1] - 2 * Tn[1:-1, 1:-1] + Tn[0:-2, 1:-1])))
    T[0, :] = 20     # From B.C
    T[-1, :] = -100  # From B.C
    T[:, 0] = 150    # From B.C
    T[:, -1] = 100   # From B.C

fig = pyplot.figure(figsize=(11, 7), dpi=100)
pyplot.contourf(X, Y, T)
pyplot.colorbar()
pyplot.contour(X, Y, T)
pyplot.xlabel('X')
pyplot.ylabel('Y');
You are using a Forward Time Centered Space discretisation scheme to solve your heat equation which is stable if and only if alpha*dt/dx**2 + alpha*dt/dy**2 < 0.5. With your values for dt, dx, dy, and alpha you get
alpha*dt/dx**2 + alpha*dt/dy**2 = 19.8 > 0.5
Which means your numerical solution will diverge very quickly. To get around this you need to make dt smaller and/or dx and dy larger. For example, for dt=2.5e-5 and the rest as before you get alpha*dt/dx**2 + alpha*dt/dy**2 = 0.495, and the solution will look like this after 1000 iterations:
Alternatively, you could use a different discretisation scheme, for example the ADI (alternating-direction implicit) scheme, which is unconditionally stable but harder to implement.
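For reference, the stability number quoted above can be checked directly from the question's parameters (a small sketch reusing nx, dx, dy, and alpha from the code):

for dt in (0.001, 2.5e-5):
    s = alpha*dt/dx**2 + alpha*dt/dy**2
    print(dt, s, "stable" if s < 0.5 else "unstable")
# 0.001  -> ~19.8  (unstable)
# 2.5e-5 -> ~0.495 (stable)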
Beginner in python here.
300 points are randomly generated where x and y are between 0 and 1.
I need to count the number of points that fall inside the unit circle and also estimate pi using these points. Ideally I want the code to stay similar to this (what I have so far):
import math
import random

points = 300
x = [random.random() for jj in range(points)]
y = [random.random() for xx in x]

count = 0
for xx, yy in zip(x, y):
    if xx**2 + yy**2 < 1:
        do_not_count_it
    else:
        count_it

sum = all_points_in_unit_circle
Any suggestions on how to complete the code?
You were close.
You just need a condition for when the point is inside (no need for else:).
You inverted the condition: you should count the point when xx**2 + yy**2 < 1.
The variables sum and count are the same thing; you only need count.
Not a mistake, but use multiplication instead of exponentiation when possible.
The math library is unused. You could have used sqrt(), but since you compare against 1 and sqrt(1) == 1, it is unnecessary.
Which gives:
import random

points = 300
x = [random.random() for jj in range(points)]
y = [random.random() for xx in x]

count = 0
for xx, yy in zip(x, y):
    if xx * xx + yy * yy < 1:
        count += 1

print (count)
BTW, it works for both Python 2 and Python 3.
I'm not too familiar with Monte Carlo methods, but a quick read tells me that you should simply do
for xx, yy in zip(x, y):
    if xx**2 + yy**2 <= 1:
        count += 1
And then just approximate pi like so
approxPi = 4.0 * count / points
you can also do it like this:
from random import random
num_of_points = 300
points = [(random(), random()) for _ in range(num_of_points)]
points_inside = sum(x**2 + y**2 < 1 for x, y in points)
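To turn that count into the estimate, the same formula as in the other answers applies:

print(4 * points_inside / num_of_points)  # ≈ pi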
With x and y drawn uniformly from [0, 1], the test xx**2 + yy**2 <= 1 selects the quarter of the unit circle that lies inside the square; its area is Pi*1**2/4 = Pi*0.25, while the square is 1x1 with area 1x1 = 1.
Since #points_in_circle / #points_in_square = area_of_circle / area_of_square,
we have circle_count / all_count = Pi*0.25 / 1,
so Pi = circle_count / all_count * 1 / 0.25.
This code will print out the PI value:
import math
import random

points = 300
x = [random.random() for jj in range(points)]
y = [random.random() for xx in x]

count_circle = 0
for xx, yy in zip(x, y):
    if xx**2 + yy**2 <= 1:
        count_circle += 1

pi = count_circle / points / 0.25
print(pi)