I am currently simulating light passing through an optics system with Python and Zemax. I have it set up so that I define the x and y boundaries of the "sensor", which lets me choose the size of the area I want to simulate, and I get one rectangle.
I'd like to simulate nine rectangles in a 3x3 grid. I am unsure which way would be the most elegant: my first idea was to hardcode the different intervals into nine different scripts and run those through a bash script, but that seems a bit too inelegant.
How do I have to define xmax, xmin, ymax and ymin so that I can run the same simulation and get those nine rectangles?
My thought was to maybe create some sort of list where the boundaries are defined, and then perhaps rerun the simulation with a different boundary each time and finally merging the images that appear.
The current code is quite long, but the parameters are all set in a main function that looks like this:
if __name__ == '__main__':
    DirNameZmx = r'C:\Some\Path'
    FileNameZmx = r"Optics.zmx"
    DirNameResults = r"C:\Some\Other\Path"
    FileNameResults = r"Interferogram_Result"
    (QueueFieldsOut, QueueToDetector, ProcessRaytracing, ProcessesPsfWorkers, ProcessDetector) = \
        InitializeSimulation(DirNameZmx=DirNameZmx, FileNameZmx=FileNameZmx,
                             DirNameResults=DirNameResults, FileNameResults=FileNameResults,
                             FieldAngleHxMin=-0.02, FieldAngleHxMax=+0.02, dFieldAngleX=0.001,
                             FieldAngleHyMin=-0.06, FieldAngleHyMax=+0.06, dFieldAngleY=0.001,
                             NbrWavelength=1, Configurations=[1, 2], NbrRaysFieldRow=32, RAperture=0.99,
                             DetectorImageSize=11., DetectorPixelSize=0.011, ZeroPadding=8,
                             BatchRaysMax=512**2, NbrProcessWorkers=2)
    print(ProcessRaytracing.join())
    for Process in ProcessesPsfWorkers:
        print(Process.join())
        print(Process.name, Process.exitcode)
    print(ProcessDetector.join())

    data = np.load(os.path.join(DirNameResults, FileNameResults+'.npy'))
    plt.imshow(data, cmap="coolwarm")
    plt.show()
The FieldAngleHxMin/Max and FieldAngleHyMin/Max parameters are the rectangle boundaries.
Simple iteration will do the job.
Try this:
def nine_squares(FieldAngleHxMin, FieldAngleHxMax, FieldAngleHyMin, FieldAngleHyMax):
    xstep = (FieldAngleHxMax - FieldAngleHxMin) / 3
    ystep = (FieldAngleHyMax - FieldAngleHyMin) / 3
    for i in range(3):
        for j in range(3):
            xstartpoint = FieldAngleHxMin + i * xstep
            xendpoint = FieldAngleHxMin + (i + 1) * xstep
            ystartpoint = FieldAngleHyMin + j * ystep
            yendpoint = FieldAngleHyMin + (j + 1) * ystep
            yield (xstartpoint, xendpoint, ystartpoint, yendpoint)
It is a generator that yields a tuple of start and end coordinates for each of the nine sub-rectangles.
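A minimal sketch of how that generator could drive the existing simulation, reusing the call from the main block above; the per-tile result names (Tile_0 through Tile_8) and the final stitching step are assumptions made for illustration, and whether stitching or simply summing the arrays is the right merge depends on how each tile maps onto the detector:

import os
import numpy as np

# DirNameZmx, FileNameZmx, DirNameResults and InitializeSimulation are the ones
# defined in the original script; everything named *_Tile_* below is hypothetical.
results = []
for k, (HxMin, HxMax, HyMin, HyMax) in enumerate(
        nine_squares(-0.02, +0.02, -0.06, +0.06)):
    FileNameTile = "Interferogram_Result_Tile_%d" % k
    (QueueFieldsOut, QueueToDetector, ProcessRaytracing,
     ProcessesPsfWorkers, ProcessDetector) = \
        InitializeSimulation(DirNameZmx=DirNameZmx, FileNameZmx=FileNameZmx,
                             DirNameResults=DirNameResults, FileNameResults=FileNameTile,
                             FieldAngleHxMin=HxMin, FieldAngleHxMax=HxMax, dFieldAngleX=0.001,
                             FieldAngleHyMin=HyMin, FieldAngleHyMax=HyMax, dFieldAngleY=0.001,
                             NbrWavelength=1, Configurations=[1, 2], NbrRaysFieldRow=32,
                             RAperture=0.99, DetectorImageSize=11., DetectorPixelSize=0.011,
                             ZeroPadding=8, BatchRaysMax=512**2, NbrProcessWorkers=2)
    ProcessRaytracing.join()
    for Process in ProcessesPsfWorkers:
        Process.join()
    ProcessDetector.join()
    results.append(np.load(os.path.join(DirNameResults, FileNameTile + '.npy')))

# If tile k = i*3 + j maps x to image rows and y to columns, the nine arrays can
# be stitched back into one 3x3 mosaic like this:
rows = [np.concatenate(results[r*3:(r+1)*3], axis=1) for r in range(3)]
full_image = np.concatenate(rows, axis=0)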
I am working on a project related to charge distribution on a sphere, and I decided to simulate the problem using vpython and Coulomb's law. I ran into an issue when I created the sphere: I am trying to evenly place about 1000 points (charges) on its surface and I can't seem to succeed; I have tried several ways but can't make the points lie on the sphere.
I defined an arbitrary value SOYDNR as a number to divide the diameter of the sphere into smaller segments. This would allow me to create smaller rings of charges and fill out the surface of the sphere with charges. Then I make a list with 4 values that represent different parts of the radius, to create the charge rings on the surface. Then I run into a problem and I am not sure how to deal with it: I have tried calculating the ring radius at those specific heights, but I couldn't implement it. This is how it looks visually: ![Sphere with charges on the surface](https://i.stack.imgur.com/3N4x6.png) If anyone has any suggestions, I would be grateful, thanks!
SOYDNR = 10  # can be changed
SOYD = 2*radi/SOYDNR  # strips of Y direction; initial height + SOYD until = 2*radi
theta = 0
dtheta1 = 2*pi/NCOS
y_list = np.arange(height - radi + SOYD, height, SOYD).tolist()
print(y_list)
for i in y_list:
    while Nr < NCOS and theta < 2*pi:
        position = radi*vector(cos(theta), i*height/radi, sin(theta))
        points_on_sphere = points_on_sphere + [sphere(pos=position, radius=radi/50, color=vector(1, 0, 0))]
        Nr = Nr + 1
        theta = theta + dtheta1
    Nr = 0
    theta = 0
I found a way that works well for me: it creates a bunch of spheres in the region described by an if statement. This is the code I am using in my simulation to create the sphere with points on it.
def SOSE(radi, number_of_charges, height):
    Charged_Sphere = sphere(pos=vector(0, height, 0), radius=radi, color=vector(3.5, 3.5, 3.5), opacity=0.2)
    points_on_sphere = []
    NCOS = number_of_charges
    theta = 0
    dtheta = 2*pi/NCOS
    dr = radi/60
    direcVector = vector(0, height, 0)
    while theta < 2*pi:
        posvec1 = radi*vector(1-radi*random(), 1-radi*random()/radi, 1-radi*random())
        posvec2 = radi*vector(1-radi*random(), -1+radi*random()/radi, 1-radi*random())
        if mag(posvec1) < radi and mag(posvec1) > (radi-dr):
            posvec1 = posvec1 + direcVector
            points_on_sphere = points_on_sphere + [sphere(pos=posvec1, radius=radi/60, color=vector(1, 0, 0))]
            theta = theta + dtheta
        if mag(posvec2) < radi and mag(posvec2) > (radi-dr):
            posvec2 = posvec2 + direcVector
            points_on_sphere = points_on_sphere + [sphere(pos=posvec2, radius=radi/60, color=vector(1, 0, 0))]
            theta = theta + dtheta
This code can be edited to add more points. I have two if statements because I want to change the height at which the sphere sits, and with just one statement I only see half of the sphere. :)
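For the ring-based approach from the original question, the missing piece was the ring radius at each strip height. A minimal NumPy-only sketch (assumed radius, height and point counts, with the vpython sphere() calls left out) could look like this:

import numpy as np

radi, height = 1.0, 0.0            # assumed sphere radius and centre height
n_strips, n_per_ring = 10, 100     # assumed number of rings and charges per ring

# Skip the poles, then place one ring of charges at each strip height.
ys = np.linspace(height - radi, height + radi, n_strips + 2)[1:-1]
points = []
for y in ys:
    r_ring = np.sqrt(radi**2 - (y - height)**2)   # ring radius from the sphere equation
    theta = np.linspace(0, 2*np.pi, n_per_ring, endpoint=False)
    points.extend(zip(r_ring*np.cos(theta), np.full(n_per_ring, y), r_ring*np.sin(theta)))
points = np.array(points)          # (n_strips * n_per_ring, 3) positions on the surface

In vpython, each row of points would then become one small sphere(pos=vector(x, y, z), ...) as in the code above.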
What I want to do is visualize the data streamed from my Xbox 360 Kinect in real time.
Essentially I am creating a point cloud from the data and then visualizing it. The code I have is working, but it is really slow.
Basically this code adds a point cloud once it is received from the Kinect. Whenever another point cloud is received, the previous one is removed and a new one is added.
Is there a better way to do this? Something that is much more responsive, with higher frame rates?
mat = rendering.Material()
mat.base_color = [0, 0, 0, 1.0]
mat.shader = "defaultLit"

pcl = o3d.geometry.PointCloud()
# This line receives the data from the Kinect in the format [x, y, z, r, g, b]
pcl.points = o3d.utility.Vector3dVector(kinect.streamCloud())
self.scene.scene.remove_geometry("kinect")
self.scene.scene.add_geometry("kinect", pcl, mat)
This is the code for streaming data from the Kinect:
def streamCloud():
    depth = freenect.sync_get_depth()
    pcl = np.zeros(shape=(307200, 3))
    c = 0
    for i in range(480):
        for j in range(640):
            z = depth[0][i, j]
            #z = 1.0 / (d[i,j] * -0.0030711016 + 3.3309495161)
            #z = depth[0][i,j].astype(np.uint8)
            #x = (i - cx) * z / fx
            x = j
            y = i
            #y = (j - cy) * z / fy
            pcl[c] = [x, y, z]
            c = c + 1
    return pcl
The Kinect generates about 300,000 points in every frame, which is too much data to draw; at 30 FPS that is 9,000,000 points per second. The first thing you can do is downsample the cloud: you can use cloud.uniform_down_sample(every_k_points) to keep every k-th point of the cloud. Or you can modify your read function and change the loops over i and j to take only every 10th pixel:
for i in range(0, 480, 10):
    for j in range(0, 640, 10):
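Here is a hedged sketch of both suggestions combined; stream_cloud_downsampled is a hypothetical helper, not part of the question's code, and it assumes the same x = column, y = row, z = depth mapping as streamCloud:

import numpy as np

def stream_cloud_downsampled(depth_frame, step=10):
    # depth_frame is the 480x640 depth array, e.g. freenect.sync_get_depth()[0].
    z = depth_frame[::step, ::step]            # strided read instead of nested Python loops
    y, x = np.mgrid[0:480:step, 0:640:step]    # row (i) and column (j) pixel indices
    return np.column_stack((x.ravel(), y.ravel(), z.ravel())).astype(np.float64)

Alternatively, an already-built Open3D cloud can be reduced directly with pcl_small = pcl.uniform_down_sample(10), which keeps every 10th point.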
I'm writing a game that needs rotation of a polygon. I have an array of points and a function that rotates a point around the (0, 0) origin:
def rotate_origin_only(xy, radians):
    x, y = xy
    xx = x * math.cos(radians) + y * math.sin(radians)
    yy = -x * math.sin(radians) + y * math.cos(radians)
    return xx, yy
I tried rotating in place like this:
Figure = np.array(((0, 8), (8, -8), (0, 0), (-8, -8)))
for i in range(7):
    for i, point in enumerate(Figure):
        Figure[i] = rotate_origin_only(point, math.radians(33))
    plt.fill(*list(zip(*Figure)))
plt.show()
But I get something broken.
However, if I use a temporary list, like this:
Figure = np.array(((0, 8), (8, -8), (0, 0), (-8, -8)))
for i in range(80):
    temp = []
    for point in Figure:
        temp.append(rotate_origin_only(point, math.radians(33 * i)))
    plt.fill(*list(zip(*temp)))
plt.show()
Everything works as expected.
Why?
It seems like you are missing a * i in the first algorithm.
In the first algorithm:
Figure[i] = rotate_origin_only(point, math.radians(33))
In the second:
temp.append(rotate_origin_only(point, math.radians(33 * i)))
Also, you are using i twice in the nested for-loop. This is a bad coding practice and can cause confusion. Commonly the second loop variable is called j, like so:
for i in range(7):
    for j, point in enumerate(Figure):
It seems that the rotation function, or floats in general, have too low accuracy to rotate an already-rotated shape many times. I'll just rewrite my code to always rotate from the base shape.
So what I'm doing is creating sine waves with normally distributed amplitudes and frequencies within given ranges, e.g. 5 V with 2-10 Hz. My attempt at this is to build a function with the given amplitude and frequency and run it until its first turning point. From there I calculate the next function and add the y value of the previous function's turning point (as a shift) so that it starts from that point. My problem is that for some of the function changes I get straight lines rather than curves. If someone could tell me where I'm going wrong, I'd appreciate it. Just to note, I use 8 ms increments for each value to be plotted.
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
import serial

newlist = np.zeros(1)
timesnew = np.zeros(1)
volts = []

def main(amp, lowerFreq, upperFreq, time, incr):
    # Creates graph and saves it in newlist and timesnew
    amt = np.int(time / incr)
    list = []
    timels = []  # np.zeros(amt+amt)
    curtime = 0
    loweramp = -amp
    mu, sigma = 0, 1
    ybefore = 0
    rand = stats.truncnorm((loweramp - mu) / sigma, (amp - mu) / sigma, loc=mu, scale=sigma)
    freqr = stats.truncnorm((lowerFreq - mu) / sigma, (upperFreq - mu) / sigma, loc=mu, scale=sigma)
    i = 0
    while i < amt:
        # get amp
        thisAmp = rand.rvs()
        angleFreq = 2 * np.pi * freqr.rvs()
        xtp = np.arccos(0) / angleFreq  # x value of turning point
        yval = thisAmp * np.sin(angleFreq * xtp)
        # check that yvalue(voltage) is okay to be used - is within +-amp range
        while not loweramp <= yval + ybefore <= amp:
            thisAmp = rand.rvs()
            angleFreq = 2 * np.pi * freqr.rvs()
            xtp = np.arccos(0) / angleFreq
            yval = thisAmp * np.sin(angleFreq * xtp)
        # now add values to list
        t = 0
        while t <= xtp:
            ynow = thisAmp * np.sin(angleFreq * t) + ybefore
            # print ynow
            list.append(ynow)
            curtime += incr
            timels.append(curtime)
            t += incr
        i += 1
        print i
        ybefore = ynow
    newlist = np.asarray(list)
    timesnew = np.asarray(timels)
    #a = np.column_stack((timesnew, newlist))
    np.savetxt("C://foo.csv", a, delimiter=";", fmt='%.10f')
    addvolts()
    plt.plot(timels, list)
    plt.show()

if __name__ == "__main__":
    main(5, 1, 2, 25, 0.00008)
EDIT:
Basically here is the problem: after the turning point the function does not seem to be sinusoidal (the line looks linear), and I can't understand why, or at least how to make the functions end up more "curvy" and less "sharp" at the turning points.
I'm thinking maybe each function change shouldn't be too different from the previous function, but then I would lose the randomness. I'd like it to "look better", but I'm not sure how to achieve that unless I ran the frequencies in order. I'm trying to emulate a white-noise file that was given to me as part of a job I applied for - the white noise would be sent to a digital-to-analog converter and used to test equipment. Obviously I didn't get the position, BUT for knowledge purposes I want to complete this.
I was also given a graph of the white-noise file, which is 700 minutes long. Comparing it with my own plot shows the difference between my output and the given file, so I think I'm going to attempt to run each function for an entire period rather than just up to a single turning point.
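Here is a minimal sketch of that full-period idea: each segment is one complete period of a sine with a freshly drawn amplitude and frequency, so every segment starts and ends at 0 V and the joins stay continuous. The draws are uniform rather than truncated-normal purely for brevity, and the increment is the value from the main() call above.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
incr = 0.00008                     # time step in seconds, as in main(5, 1, 2, 25, 0.00008)
amp, f_lo, f_hi = 5.0, 1.0, 2.0    # amplitude and frequency range

t_segments, y_segments = [], []
t_offset = 0.0
for _ in range(20):                # 20 segments, one full period each
    a = rng.uniform(-amp, amp)
    f = rng.uniform(f_lo, f_hi)
    period = 1.0 / f
    t = np.arange(0.0, period, incr)
    t_segments.append(t + t_offset)
    y_segments.append(a * np.sin(2 * np.pi * f * t))
    t_offset += period

plt.plot(np.concatenate(t_segments), np.concatenate(y_segments))
plt.show()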
True white noise is completely random, so trying to emulate white noise with some kind of function is already a contradiction.
If the file you have is really supposed to be white noise, then it has already undergone some kind of filtering. You can of course do the same in your program: create some truly random numbers and use a filter function to obtain a "smoothing" effect.
For example you can use a Hann filter and convolve the random noise with the filter. This is shown below.
import numpy as np
import scipy.signal
import matplotlib.pyplot as plt
y = np.random.rand(1600)
win = scipy.signal.hann(15)
filtered = scipy.signal.convolve(y, win, mode='same') / sum(win)
fig, (ax, ax2) = plt.subplots(nrows=2, sharex=True, sharey=True)
ax.plot(y, linestyle="-", marker=".", lw=0.3, markersize=1, color="r", alpha=0.5)
ax.set_title("random noise")
ax2.plot(y, linestyle="", marker=".", color="r", markersize=1)
ax2.plot(filtered)
ax2.set_title("filtered")
plt.show()
You might want to zoom in to better see the effect, or use different parameters for the filter window.
I have an image processing problem I'm currently solving in python, using numpy and scipy. Briefly, I have an image that I want to apply many local contractions to. My prototype code is working, and the final images look great. However, processing time has become a serious bottleneck in our application. Can you help me speed up my image processing code?
I've tried to boil down our code to the 'cartoon' version below. Profiling suggests that I'm spending most of my time on interpolation. Are there obvious ways to speed up execution?
import cProfile, pstats
import numpy
from scipy.ndimage import interpolation

def get_centered_subimage(
        center_point, window_size, image):
    x, y = numpy.round(center_point).astype(int)
    xSl = slice(max(x-window_size-1, 0), x+window_size+2)
    ySl = slice(max(y-window_size-1, 0), y+window_size+2)
    subimage = image[xSl, ySl]
    interpolation.shift(
        subimage, shift=(x, y)-center_point, output=subimage)
    return subimage[1:-1, 1:-1]

"""In real life, this is experimental data"""
im = numpy.zeros((1000, 1000), dtype=float)

"""In real life, this mask is a non-zero pattern"""
window_radius = 10
mask = numpy.zeros((2*window_radius+1, 2*window_radius+1), dtype=float)

"""The x, y coordinates in the output image"""
new_grid_x = numpy.linspace(0, im.shape[0]-1, 2*im.shape[0])
new_grid_y = numpy.linspace(0, im.shape[1]-1, 2*im.shape[1])

"""The grid we'll end up interpolating onto"""
grid_step_x = new_grid_x[1] - new_grid_x[0]
grid_step_y = new_grid_y[1] - new_grid_y[0]
subgrid_radius = numpy.floor(
    (-1 + window_radius * 0.5 / grid_step_x,
     -1 + window_radius * 0.5 / grid_step_y))
subgrid = (
    window_radius + 2 * grid_step_x * numpy.arange(
        -subgrid_radius[0], subgrid_radius[0] + 1),
    window_radius + 2 * grid_step_y * numpy.arange(
        -subgrid_radius[1], subgrid_radius[1] + 1))
subgrid_points = ((2*subgrid_radius[0] + 1) *
                  (2*subgrid_radius[1] + 1))

"""The coordinates of the set of spots we want to contract. In real
life, this set is non-random:"""
numpy.random.seed(0)
num_points = 10000
center_points = numpy.random.random(2*num_points).reshape(num_points, 2)
center_points[:, 0] *= im.shape[0]
center_points[:, 1] *= im.shape[1]

"""The output image"""
final_image = numpy.zeros(
    (new_grid_x.shape[0], new_grid_y.shape[0]), dtype=numpy.float)

def profile_me():
    for m, cp in enumerate(center_points):
        """Take an image centered on each illumination point"""
        spot_image = get_centered_subimage(
            center_point=cp, window_size=window_radius, image=im)
        if spot_image.shape != (2*window_radius+1, 2*window_radius+1):
            continue  # Skip to the next spot
        """Mask the image"""
        masked_image = mask * spot_image
        """Resample the image"""
        nearest_grid_index = numpy.round(
            (cp - (new_grid_x[0], new_grid_y[0])) /
            (grid_step_x, grid_step_y))
        nearest_grid_point = (
            (new_grid_x[0], new_grid_y[0]) +
            (grid_step_x, grid_step_y) * nearest_grid_index)
        new_coordinates = numpy.meshgrid(
            subgrid[0] + 2 * (nearest_grid_point[0] - cp[0]),
            subgrid[1] + 2 * (nearest_grid_point[1] - cp[1]))
        resampled_image = interpolation.map_coordinates(
            masked_image,
            (new_coordinates[0].reshape(subgrid_points),
             new_coordinates[1].reshape(subgrid_points))
            ).reshape(2*subgrid_radius[1]+1,
                      2*subgrid_radius[0]+1).T
        """Add the recentered image back to the scan grid"""
        final_image[
            nearest_grid_index[0]-subgrid_radius[0]:
            nearest_grid_index[0]+subgrid_radius[0]+1,
            nearest_grid_index[1]-subgrid_radius[1]:
            nearest_grid_index[1]+subgrid_radius[1]+1,
            ] += resampled_image

cProfile.run('profile_me()', 'profile_results')
p = pstats.Stats('profile_results')
p.strip_dirs().sort_stats('cumulative').print_stats(10)
Vague explanation of what the code does:
We start with a pixellated 2D image, and a set of arbitrary (x, y) points in our image that don't generally fall on an integer grid. For each (x, y) point, I want to multiply the image by a small mask centered precisely on that point. Next we contract/expand the masked region by a finite amount, before finally adding this processed sub-image to a final image, which may not have the same pixel size as the original image. (Not my finest explanation. Ah well).
I'm pretty sure that, as you said, the bulk of the calculation time happens in interpolation.map_coordinates(…), which gets called once for every iteration on center_points, here 10,000 times. Generally, working with the numpy/scipy stack, you want the repetitive task over a large array to happen in native Numpy/Scipy functions -- i.e. in a C loop over homogeneous data -- as opposed to explicitly in Python.
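To make that point concrete, here is a tiny self-contained comparison, unrelated to the optics code itself, of the same reduction done element by element in Python versus in one native NumPy call:

import time
import numpy as np

x = np.random.random(1_000_000)

# Explicit Python loop: every element pays interpreter overhead.
t0 = time.perf_counter()
total_loop = 0.0
for value in x:
    total_loop += value * value
t_loop = time.perf_counter() - t0

# The same sum of squares as a single native NumPy call (one C loop).
t0 = time.perf_counter()
total_vec = np.dot(x, x)
t_vec = time.perf_counter() - t0

print("loop: %.3f s   vectorized: %.5f s" % (t_loop, t_vec))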
One strategy that might speed up the interpolation, but that will also increase the amount of memory used, is:
1. First, fetch all the subimages (here named masked_image) into a 3-dimensional array (window_radius x window_radius x center_points.size).
2. Make a ufunc (the documentation on ufuncs is worth reading) that wraps the work that has to be done on each subimage, using numpy.frompyfunc, which should return another 3-dimensional array (subgrid_radius[0] x subgrid_radius[1] x center_points.size). In short, this creates a vectorized version of the Python function that can be broadcast element-wise over an array.
3. Build the final image by summing over the third dimension. A toy sketch of this stacking pattern follows below.
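The sketch uses plain broadcasting rather than numpy.frompyfunc and skips the interpolation and per-point placement steps entirely; it only illustrates the stack-then-reduce pattern on made-up data:

import numpy as np

R = 10                                   # window_radius from the question
n = 500                                  # number of center points (much smaller here)

# Step 1: all subimages gathered into one 3-D stack (random data stands in for them).
subimages = np.random.random((2*R + 1, 2*R + 1, n))
mask = np.random.random((2*R + 1, 2*R + 1))

# Step 2: the per-subimage work (here just masking) broadcasts over the whole stack
# in native NumPy instead of a Python loop.
processed = subimages * mask[:, :, None]

# Step 3: one native reduction over the third dimension.
combined = processed.sum(axis=2)
print(combined.shape)                    # (21, 21)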
Hope that gets you closer to your goals!