How to find the Y face of a cube in Maya with Python - python

Sorry for such a specific question, guys; I think only people with knowledge of Maya will be able to answer. In Maya I have cubes of different sizes, and I need to find with Python which face of each cube is pointing down the Y axis (the pivot is in the center). Any tips will be appreciated.
Thanks a lot :)

import re
from maya import cmds
from pymel.core.datatypes import Vector, Matrix, Point

obj = 'pCube1'

# Get the world transformation matrix of the object
obj_matrix = Matrix(cmds.xform(obj, query=True, worldSpace=True, matrix=True))

# Iterate through all faces
for face in cmds.ls(obj + '.f[*]', flatten=True):
    # Get the face normal in object space
    face_normals_text = cmds.polyInfo(face, faceNormals=True)[0]
    # Convert to a list of floats
    face_normals = [float(digit) for digit in re.findall(r'-?\d*\.\d*', face_normals_text)]
    # Create a Vector object and multiply with the matrix to get it in world space
    v = Vector(face_normals) * obj_matrix
    # Check if the vector faces downwards
    if max(abs(v[0]), abs(v[1]), abs(v[2])) == -v[1]:
        print(face, v)

If you just need a quick solution without vector math, PyMEL, or the API, you can use cmds.polySelectConstraint to find the faces aligned with a normal. All you need to do is select all the faces, then use the constraint to keep only the ones pointing the right way. The following selects all the faces in a mesh that point along a given axis:
import maya.cmds as cmds

def select_faces_by_axis(mesh, axis=(0, 1, 0), tolerance=45):
    cmds.select(mesh + ".f[*]")
    cmds.polySelectConstraint(mode=3, type=8, orient=2, orientaxis=axis, orientbound=(0, tolerance))
    cmds.polySelectConstraint(dis=True)  # remember to turn the constraint off!
The axis is the x, y, z direction you want and tolerance is the slop in degrees you'll tolerate. To get the downward-facing faces (negative Y in Maya) you'd do
select_faces_by_axis('your_mesh_here', (0, -1, 0))
or
select_faces_by_axis('your_mesh_here', (0, -1, 0), 1)
# this would get faces only within 1 degree of downward
This method has the advantage of operating mostly in Maya's C++ code, so it's going to be faster than Python-based methods that loop over all the faces in a mesh.
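If you also want the matching faces back as a Python list (to process or deselect them), you can query the selection after the constraint has run. A minimal sketch, assuming the select_faces_by_axis function above and an example object named 'pCube1':

import maya.cmds as cmds

# run the constraint-based selection, then read back what ended up selected
select_faces_by_axis('pCube1', axis=(0, -1, 0), tolerance=1)
down_faces = cmds.ls(selection=True, flatten=True)  # e.g. ['pCube1.f[3]']
print(down_faces)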

With pymel the code can be a bit more compact. Selecting the faces pointing downwards:
import pymel.core as pm

n = pm.PyNode("pCubeShape1")
s = []
for f in n.faces:
    if f.getNormal(space='world')[1] < 0.0:
        s.append(f)
pm.select(s)

Related

Maya API Python symmetry table with MRichSelection?

I was wondering if there is a way to access the symmetry table of the MRichSelection, getting as a result the positive side, the seam, and the negative side, with the positive and negative sides ordered by vertex-id correspondence. I.e. vertex id 15 is the symmetric counterpart of vertex id 350, and they are both at index 5 in the positive and negative lists.
I know I can achieve something similar using filterExpand, but I believe the lists are not ordered in a way that lets me access the opposite vertex.
I don't know if you ever found a solution to this, but I will post mine for future TDs looking for one.
So let's assume you want to get the corresponding verts between left and right across the YZ plane. You have two different options: use the MRichSelection to handle the symmetry table for you, or calculate the corresponding vert yourself by finding the smallest distance vector on the opposite side. Note: if you use the MRichSelection method, you will need to make sure that symmetry mode is enabled in the viewport.
I will show both answers, so let's get started.
Also note: I will be using the YZ plane, as mentioned earlier, so adjust to your liking if needed.
Solution 1 (calculating it yourself):
# importing the OpenMaya module
from maya.api import OpenMaya as om
from maya import cmds

# converting the selected object into an MObject and an MFnMesh function set
mSel = om.MSelectionList()
mSel.add(cmds.ls(sl=1)[0])
mObj = mSel.getDagPath(0)
mfnMesh = om.MFnMesh(mObj)

# getting our base points
baseShape = mfnMesh.getPoints()
# this call can be used to revert the object back to the baseShape
mfnMesh.setPoints(baseShape)

# getting left and right verts
mtol = 0.02  # mid tolerance, in case the mesh is not completely symmetric at the middle
lVerts = []  # for storing left verts
rVerts = []  # for storing right verts
mVerts = []  # for storing mid verts
corrVerts = {}  # for storing corresponding verts

for i in range(mfnMesh.numVertices):  # iterating through all the verts on the mesh
    thisPoint = mfnMesh.getPoint(i)  # getting the current point position
    if thisPoint.x > 0 + mtol:  # if the point value on the x axis is bigger than 0 + midTolerance
        lVerts.append((i, thisPoint))  # append to the left vert storage list (i = vert index, thisPoint = vert MPoint position)
    elif thisPoint.x < 0 - mtol:  # opposite of the left vert test
        rVerts.append((i, thisPoint))
    else:  # if none of the above, assign to mid verts
        mVerts.append((i, thisPoint))

rVertspoints = [p for v, p in rVerts]  # the vert MPoint positions of the right side

for vert, mp in lVerts:  # going through our left points, unpacking the vert index and MPoint position
    nmp = om.MPoint(-mp.x, mp.y, mp.z)  # the mirrored MPoint of the left-side vert
    rp = mfnMesh.getClosestPoint(nmp)  # getting the closest point on the mesh
    if rp[0] in rVertspoints:  # checking if the point is on the right side
        corrVerts[vert] = rVerts[rVertspoints.index(rp[0])][0]  # adding it if it is
    else:  # if it is not, calculate the closest vert
        # iterating through rVertspoints and finding the smallest distance
        dList = [nmp.distanceTo(rVert) for rVert in rVertspoints]  # distance to each right-side point from the mirrored point
        mindist = min(dList)  # the closest distance
        corrVerts[vert] = rVerts[dList.index(mindist)][0]  # adding the vert

# now corrVerts stores the corresponding vertices from left to right
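Not part of the original answer, but as a quick illustration of how the table can be used: a minimal sketch that pushes the left-side shape onto the right side, assuming mfnMesh and corrVerts from the code above and symmetry across the YZ plane (mirroring on X).

# sketch: mirror the left-side positions onto the right side across the YZ plane
newPoints = mfnMesh.getPoints()  # MPointArray in object space
for lIdx, rIdx in corrVerts.items():
    lp = newPoints[lIdx]
    newPoints[rIdx] = om.MPoint(-lp.x, lp.y, lp.z)
mfnMesh.setPoints(newPoints)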
Solution 2 (using MRichSelection):
# MAKE SURE SYMMETRY IN THE VIEWPORT IS TURNED ON FOR THIS TO WORK! (it will also work with topological symmetry)

# importing the OpenMaya module
from maya.api import OpenMaya as om
from maya import cmds

# converting the selected object into an MObject and an MFnMesh function set
mSel = om.MSelectionList()
mSel.add(cmds.ls(sl=1)[0])
mObj = mSel.getDagPath(0)
mfnMesh = om.MFnMesh(mObj)

# getting our base points
baseShape = mfnMesh.getPoints()
# this call can be used to revert the object back to the baseShape
mfnMesh.setPoints(baseShape)

# getting left verts
mtol = 0.02  # mid tolerance, in case the mesh is not completely symmetric at the middle
lVerts = []  # for storing left verts
corrVerts = {}  # for storing corresponding verts

for i in range(mfnMesh.numVertices):  # iterating through all the verts on the mesh
    thisPoint = mfnMesh.getPoint(i)  # getting the current point position
    if thisPoint.x > 0 + mtol:  # if the point value on the x axis is bigger than 0 + midTolerance
        lVerts.append((i, thisPoint))  # append to the left vert storage list (i = vert index, thisPoint = vert MPoint position)

# selecting our verts with symmetry on
SymSelection = cmds.select(["%s.vtx[%s]" % (mObj, i) for i, v in lVerts], sym=True)

# getting the rich selection; it stores the symmetry information for us
mRichBase = om.MGlobal.getRichSelection()
lCor = mRichBase.getSelection()  # this stores our left-side verts as an MSelectionList
rCor = mRichBase.getSymmetry()   # this stores the symmetry verts as an MSelectionList

mitL = om.MItSelectionList(lCor)  # creating selection iterators so we can get the components
mitR = om.MItSelectionList(rCor)
while not mitL.isDone():  # iterating through the left list
    mitLComp = mitL.getComponent()  # getting the dag path and components of the left side
    mitRComp = mitR.getComponent()  # getting the dag path and components of the right side
    mitLCorVert = om.MItMeshVertex(mitLComp[0], mitLComp[1])  # creating our vertex iterators
    mitRCorVert = om.MItMeshVertex(mitRComp[0], mitRComp[1])
    while not mitLCorVert.isDone():  # iterating through our verts
        corrVerts[mitLCorVert.index()] = mitRCorVert.index()  # adding corresponding verts to our dictionary
        mitLCorVert.next()  # go to the next vert; needed to end the loop
        mitRCorVert.next()  # go to the next vert; needed to end the loop
    mitL.next()  # go to the next selection in the list, if any; needed to end the loop
    mitR.next()  # go to the next selection in the list, if any; needed to end the loop

cmds.select(cl=1)  # deselecting our verts
# now corrVerts stores the corresponding vertices from left to right
Hope this helps everyone looking for a solution.
Cheers,
Bjarke Rauff, Rigging TD.
The answer by @Bjarke Rauff was very helpful; I wanted to add a note about speed.
MFnMesh.getClosestPoint() builds an octree to find the point efficiently, but it rebuilds it on every call. A mesh with 100k points can take up to 45s to process.
Use an MMeshIntersector() to cache the data between lookups. This speeds up the table creation by about 900x for 100k points, down to 0.05s.
from maya.api import OpenMaya as om

def build_flip_table(mesh, flip_matrix):
    # mesh: MDagPath to the poly object
    # flip_matrix: matrix used to flop the point across the symmetry axis
    itMesh = om.MItMeshPolygon(mesh)
    mesh.extendToShape()
    matrix = mesh.inclusiveMatrix()
    node = mesh.node()
    intersector = om.MMeshIntersector()
    intersector.create(node, matrix)
    if not intersector.isCreated:
        print("Failed to create mesh intersector")
        return
    flipped_ids = {}
    while not itMesh.isDone():
        id = itMesh.index()
        face_center = itMesh.center()
        # flop the point across the axis
        flipped_point = face_center * flip_matrix
        MpointOnMesh = intersector.getClosestPoint(flipped_point)
        if MpointOnMesh is not None:
            # get the face id property from MPointOnMesh
            flipped_id = MpointOnMesh.face
            flipped_ids[id] = flipped_id
        else:
            print("No intersection")
        itMesh.next()
    return flipped_ids
NOTE
I tried hash tables with a tuple of the point as the key, but the point positions had slight variations, even with rounding, which produced different hashes.
I've also tested the MRichSelection approach, and it doesn't actually work consistently in practice. It seems to work when you have a perfectly mirrored mesh, but that can't be assumed; the component lists are not necessarily in sync.

How to find face neighbours in Maya?

I have a problem where I need to select faces that are next to one pre-selected face.
This may be done easily, but the problem is that when I get a neighbouring face I need to know in which direction it is facing.
So right now I am able to select faces which are connected by an edge, but I can't get the face that is, for example, left or right of the first selected face. I have tried multiple approaches but can't find the solution.
I tried:
pickWalk - cmds.pickWalk() - the problem with this is that its behavior can't be predicted, since it walks the mesh from the camera's perspective.
polyInfo - cmds.polyInfo() - this is a very useful function and the closest to an answer. In this approach I try to extract the edges of a face and then see which faces share those edges with edgeToFace(). This works well but doesn't solve my problem. To elaborate: when polyInfo returns the faces that share an edge, it doesn't return them in a way that lets me always know that, for example, edgesList[0] is the edge that points left or right. Hence if I use this on different faces, the resulting face may be facing a different direction in each case.
The hard way, with many conversions from vertex to edge and then to face, etc. But it's still the same problem: I don't know which edge is the top or left one.
The connectedFaces() method, which I call on the selected face; it returns the faces connected to the first face, but again it's the same problem: I don't know which face is facing which way.
To be clear, I'm not using a pre-selected list of faces and checking them; I need to find the faces without knowing or keeping their names somewhere. Does someone know a way that works with a selection of faces?
To elaborate on my question I made an image to make it clear:
As you can see from the example, given the selected face I need to select one of the pointed-to faces, and it must be exactly the face I want. Other methods select all neighbouring faces, but I need a method where I can say "select right" and it will select the face to the right of the first selected face.
This is one solution that should be fairly consistent under the rule that up/down/left/right are aligned with the mesh's transformation (local space), though it could be done in world space too.
The first thing I would do is build a face-relative coordinate system for every mesh face, using the average face vertex position, the face normal, and the world-space Y axis of the mesh's transformation. This involves a little vector math, so I will use the API to make it easier. This first part builds a coordinate system for each face and stores it in lists for later querying. See below.
from maya import OpenMaya, cmds

meshTransform = 'polySphere'
meshShape = cmds.listRelatives(meshTransform, c=True)[0]

meshMatrix = cmds.xform(meshTransform, q=True, ws=True, matrix=True)
primaryUp = OpenMaya.MVector(*meshMatrix[4:7])
# have a secondary up vector for faces that are facing the same way as the original up
secondaryUp = OpenMaya.MVector(*meshMatrix[8:11])

sel = OpenMaya.MSelectionList()
sel.add(meshShape)

meshObj = OpenMaya.MObject()
sel.getDependNode(0, meshObj)

meshPolyIt = OpenMaya.MItMeshPolygon(meshObj)

faceNeighbors = []
faceCoordinates = []

while not meshPolyIt.isDone():
    normal = OpenMaya.MVector()
    meshPolyIt.getNormal(normal)

    # use the secondary up if the normal is facing the same direction as the object Y
    up = primaryUp if (1 - abs(primaryUp * normal)) > 0.001 else secondaryUp

    center = meshPolyIt.center()

    faceArray = OpenMaya.MIntArray()
    meshPolyIt.getConnectedFaces(faceArray)

    meshPolyIt.next()

    faceNeighbors.append([faceArray[i] for i in range(faceArray.length())])

    xAxis = up ^ normal
    yAxis = normal ^ xAxis

    matrixList = [xAxis.x, xAxis.y, xAxis.z, 0,
                  yAxis.x, yAxis.y, yAxis.z, 0,
                  normal.x, normal.y, normal.z, 0,
                  center.x, center.y, center.z, 1]
    faceMatrix = OpenMaya.MMatrix()
    OpenMaya.MScriptUtil.createMatrixFromList(matrixList, faceMatrix)

    faceCoordinates.append(faceMatrix)
These functions will look up and return which face is next to the given one in a particular direction (X or Y) relative to that face. A dot product is used to see which neighbouring face lies most in that particular direction. This should work with any number of neighbours, but it will only return the one face that is most in that direction.
def getUpFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(0, 1, 0))

def getDownFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(0, -1, 0))

def getRightFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(1, 0, 0))

def getLeftFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(-1, 0, 0))

def getDirectionalFace(faceIndex, axis):
    faceMatrix = faceCoordinates[faceIndex]

    closestDotProd = -1.0
    nextFace = -1

    for n in faceNeighbors[faceIndex]:
        nMatrix = faceCoordinates[n] * faceMatrix.inverse()
        nVector = OpenMaya.MVector(nMatrix(3, 0), nMatrix(3, 1), nMatrix(3, 2))
        dp = nVector * axis
        if dp > closestDotProd:
            closestDotProd = dp
            nextFace = n

    return nextFace
So you would call it like this:
getUpFace(123)
With the number being the face index, this returns the index of the face that is "up" from it.
Give this a try and see if it satisfies your needs.
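To tie it to an actual selection, here is a small sketch (not from the original answer), assuming a single face such as 'pSphere1.f[42]' is selected (the name is hypothetical) and the faceNeighbors/faceCoordinates lists above have been built for that mesh:

import re
from maya import cmds

# grab the index of the currently selected face and select its "right" neighbour
selFace = cmds.ls(selection=True, flatten=True)[0]          # e.g. 'pSphere1.f[42]' (hypothetical)
faceIndex = int(re.search(r'\[(\d+)\]', selFace).group(1))  # -> 42
rightFace = getRightFace(faceIndex)
cmds.select('%s.f[%d]' % (meshTransform, rightFace))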
You can also use polyListComponentConversion:
import pprint
import maya.cmds as cmds

init_face = cmds.ls(sl=True)

# get edges
edges = cmds.polyListComponentConversion(init_face, ff=True, te=True)

# get neighbouring faces
faces = cmds.polyListComponentConversion(edges, fe=True, tf=True, bo=True)

# show neighbouring faces
cmds.select(faces)

# print the face normal of each neighbouring face
pprint.pprint(cmds.polyInfo(faces, fn=True))
The easiest way of doing this is using Pymel's connectedFaces() on the MeshFace:
http://download.autodesk.com/us/maya/2011help/pymel/generated/classes/pymel.core.general/pymel.core.general.MeshFace.html
import pymel.core as pm
sel = pm.ls(sl=True)[0]
pm.select(sel.connectedFaces())

maya python iterating a big number of vertex

I am writing a script in Python for Maya to swap vertex positions from one side of a mesh to the other.
Since I want the flipping to be topology-based, I am using the topological symmetry selection tool to find the vertex correspondence.
I managed to do that using filterExpand and xform.
The problem is that it is quite slow on a mesh with a large poly count, and I was wondering how this could be done using OpenMaya instead.
import maya.cmds as cmds

def flipMesh():
    sel = cmds.ls(sl=1)
    axis = {'x': 0, 'y': 1, 'z': 2}
    reverse = [1.0, 1.0, 1.0]
    # querying the active symmetry axis
    activeaxis = cmds.symmetricModelling(q=1, axis=1)
    reverse[axis[activeaxis]] = -1.0
    # getting the vertex count
    verts = cmds.polyEvaluate(v=1)
    # selecting all vertices
    cmds.select(sel[0] + '.vtx[0:' + str(verts) + ']')
    # getting all the positive vertices
    posit = cmds.filterExpand(sm=31, ex=1, smp=1)
    seam = cmds.filterExpand(sm=31, ex=1, sms=1)
    # swapping positions on the positive side with the negative side
    for pos in posit:
        cmds.select(pos, sym=True)
        neg = cmds.filterExpand(sm=31, ex=1, smn=1)
        posT = cmds.xform(pos, q=1, t=1)
        negT = cmds.xform(neg[0], q=1, t=1)
        cmds.xform(pos, t=[a * b for a, b in zip(negT, reverse)])
        cmds.xform(neg[0], t=[a * b for a, b in zip(posT, reverse)])
    # inverting positions on the seam
    for each in seam:
        seamP = cmds.xform(each, q=1, t=1)
        seaminvP = [a * b for a, b in zip(seamP, reverse)]
        cmds.xform(each, t=seaminvP)
    cmds.select(sel)
Thanks
Maurizio
You can try out OpenMaya.MFnMesh to get and set your vertices.
Here's an example that will simply mirror all points of a selected object along their z axis:
import maya.OpenMaya as OpenMaya

# Get the selected object
mSelList = OpenMaya.MSelectionList()
OpenMaya.MGlobal.getActiveSelectionList(mSelList)
sel = OpenMaya.MItSelectionList(mSelList)
path = OpenMaya.MDagPath()
sel.getDagPath(path)

# Attach to MFnMesh
MFnMesh = OpenMaya.MFnMesh(path)

# Create an empty point array to store the new points
newPointArray = OpenMaya.MPointArray()

for i in range(MFnMesh.numVertices()):
    # Create a point, and mirror it
    newPoint = OpenMaya.MPoint()
    MFnMesh.getPoint(i, newPoint)
    newPoint.z = -newPoint.z
    newPointArray.append(newPoint)

# Set the new points on the mesh all at once
MFnMesh.setPoints(newPointArray)
Instead of moving them one at a time, you can use MFnMesh.setPoints to set them all at once. You'll have to adapt your logic to this, but hopefully it helps you out with manipulating Maya's API. I should also note that you would have to resolve the normals afterwards.
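To adapt the topological-symmetry logic to that pattern, here is a rough, untested sketch (the function name and arguments are mine, not Maya's). It assumes you already have a correspondence dict corrVerts mapping positive-side vertex ids to their negative-side counterparts (for example built with the symmetry-table approaches shown earlier) and an OpenMaya.MFnMesh attached to your mesh:

import maya.OpenMaya as OpenMaya

def flip_points(mfnMesh, corrVerts, axisIdx=0):
    # mfnMesh  : OpenMaya.MFnMesh attached to the mesh
    # corrVerts: dict {positive-side vert id: negative-side vert id}
    # axisIdx  : 0, 1 or 2 for the x, y or z symmetry axis
    points = OpenMaya.MPointArray()
    mfnMesh.getPoints(points)
    # work on plain Python lists, then write everything back in one call
    newPts = [[points[i].x, points[i].y, points[i].z] for i in range(points.length())]
    for pos, neg in corrVerts.items():
        posP, negP = newPts[pos][:], newPts[neg][:]
        posP[axisIdx] = -posP[axisIdx]
        negP[axisIdx] = -negP[axisIdx]
        newPts[pos], newPts[neg] = negP, posP  # swap the two sides, mirrored across the axis
    newArray = OpenMaya.MPointArray()
    for p in newPts:
        newArray.append(OpenMaya.MPoint(p[0], p[1], p[2]))
    mfnMesh.setPoints(newArray)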

Detect loops/intersections in matplotlib scatter plot

At some point in my work, I came up with this kind of scatter plot.
I would like my script to be able to detect the fact that the curve "loops" and to give me the point (or an approximation thereof) where it does so: for instance, in this case it would be about [0.2, 0.1].
I tried to play around with some representative quantities of my points, like the norm and/or the argument, as in the following piece of code.
import numpy as np
x,y = np.genfromtxt('points.dat',unpack=True)
norm = np.sqrt(x**2+y**2)
arg = np.arctan2(y,x)
left,right = np.meshgrid(norm,norm)
norm_diff = np.fabs(left - right)
mask = norm_diff == 0.
norm_diff_ma = np.ma.masked_array(norm_diff,mask)
left,right = np.meshgrid(arg,arg)
arg_diff = np.fabs(left - right)
mask = arg_diff == 0.
arg_diff_ma = np.ma.masked_array(arg_diff,mask)
list_of_indices = np.ma.where((norm_diff_ma<1.0e-04)*(arg_diff_ma<1.0e-04))
But it does not work as intended: that might be because the dataset contains too many points, and the distance between two aligned points is of the same order of magnitude as the distance between the points in the "loop cluster"...
I was thinking about detecting clusters, or maybe even detecting lines in the scatter plot and then checking whether any two lines intersect, but I am afraid my skills in image processing only go so far.
Is there any algorithm or trick that any of you can think of that would work here?
A representative data sample can be found here.
Edit 08/13/2015 16h18: after the short discussion with @DrBwts I took a closer look at the data obtained from a pyplot.contour() call, using the following routine to extract all the vertices:
def contour_points(contour, steps=1):
    try:
        loc_arr = np.row_stack([path.interpolated(steps).vertices
                                for linecol in contour.collections
                                for path in linecol.get_paths()])
    except ValueError:
        loc_arr = np.empty((0, 2))
    finally:
        return loc_arr

y, x = contour_points(CS, steps=1).T
It turns out the points with coordinates (x, y) are ordered, in the sense that a call to pyplot.plot() connects the dots correctly.
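Since the points come out ordered, one straightforward (if brute-force) idea, sketched below under the assumption that x and y are the arrays returned by contour_points above, is to treat them as a polyline and test each pair of non-adjacent segments for intersection; the first crossing found is an approximation of the loop point.

import numpy as np

def segment_intersection(p1, p2, p3, p4):
    # return the intersection point of segments p1-p2 and p3-p4, or None
    d1, d2 = p2 - p1, p4 - p3
    denom = d1[0]*d2[1] - d1[1]*d2[0]
    if abs(denom) < 1e-12:          # parallel or degenerate segments
        return None
    diff = p3 - p1
    t = (diff[0]*d2[1] - diff[1]*d2[0]) / denom
    u = (diff[0]*d1[1] - diff[1]*d1[0]) / denom
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return p1 + t*d1
    return None

def find_loop(x, y):
    pts = np.column_stack([x, y])
    n = len(pts) - 1                # number of segments
    for i in range(n):
        for j in range(i + 2, n):   # skip adjacent segments, they share an endpoint
            hit = segment_intersection(pts[i], pts[i+1], pts[j], pts[j+1])
            if hit is not None:
                return hit          # roughly [0.2, 0.1] for the example described above
    return None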

Numpy manipulating array of True values dependent on x/y index

So I have an array (it's large - 2048x2048), and I would like to do some element-wise operations depending on where the elements are. I'm very confused about how to do this (I was told not to use for loops, and when I tried that my IDE froze and it was going really slow).
Onto the question:
h = aperatureimage
h[:,:] = 0
indices = np.where(aperatureimage>1)
for True in h:
    h[index] = np.exp(1j*k*z)*np.exp(1j*k*(x**2+y**2)/(2*z))/(1j*wave*z)
So I have an index, which is (I'm assuming here) essentially a 'cropped' version of my larger aperatureimage array. *Note: aperatureimage is a grayscale image converted to an array; it has a shape or text on it, and I would like to find all the 'white' regions of the aperture and perform my operation on them.
How can I access the individual x/y values of index so that I can perform my exponential operation? When I try index[:,None], the program spits out 'ValueError: broadcast dimensions too large'. I also get 'array is not broadcastable to correct shape'. Any help would be appreciated!
One more clarification: x and y are the only values I would like to change (essentially the points in my array where there is white; z, k, and whatever else are defined previously).
EDIT:
I'm not sure the code I posted above is correct; it returns two empty arrays. When I do this, though:
index = (aperatureimage==1)
print len(index)
Actually, nothing I've done so far works correctly. I have a 2048x2048 image with a 128x128 white square in the middle of it. I would like to convert this image to an array, look through all the values, and determine the index values (x, y) where the array is not black (I only have white/black; a bilevel image didn't work for me). I would then like to take all the values (x, y) where the array is not 0 and multiply them by the h[index] value listed above.
I can post more information if necessary. If you can't tell, I'm stuck.
EDIT2: Here's some code that might help - I think I have the problem above solved (I can now access members of the array and perform operations on them). But for some reason the Fx values in my for loop never increase; it loops over Fy forever...
import sys, os
from scipy.signal import *
import numpy as np
import Image, ImageDraw, ImageFont, ImageOps, ImageEnhance, ImageColor

def createImage(aperature, type):
    imsize = aperature*8 # Add 0 padding to make it nice
    middle = imsize/2 # The middle (physical 0) of our image will be the imagesize/2
    im = Image.new("L", (imsize,imsize)) # Make a grayscale image with imsize*imsize pixels
    draw = ImageDraw.Draw(im) # Create a new draw method
    box = ((middle-aperature/2, middle-aperature/2), (middle+aperature/2, middle+aperature/2)) # Bounding box for aperature
    if type == 'Rectangle':
        draw.rectangle(box, fill = 'white') # Draw rectangle in the box and color it white
    del draw
    return im, middle

def Diffraction(aperaturediameter = 1, type = 'Rectangle', z = 2000000, wave = .001):
    # Constants
    deltaF = 1/8 # Image will be 8mm wide
    z = 1/3.
    wave = 0.001
    k = 2*np.pi/wave

    # Now let's get to work
    aperature = aperaturediameter * 128 # Aperaturediameter (in mm) to some pixels
    im, middle = createImage(aperature, type) # Create an image depending on type of aperature
    aperaturearray = np.array(im) # Turn image into numpy array

    # Fourier Transform of Aperature
    Ta = np.fft.fftshift(np.fft.fft2(aperaturearray))/(len(aperaturearray))

    # Transforming and calculating of Transfer Function Method
    H = aperaturearray.copy() # Copy image so H (transfer function) has the same dimensions as aperaturearray
    H[:,:] = 0 # Set H to 0
    U = aperaturearray.copy()
    U[:,:] = 0
    index = np.nonzero(aperaturearray) # Find nonzero elements of aperaturearray
    H[index[0],index[1]] = np.exp(1j*k*z)*np.exp(-1j*k*wave*z*((index[0]-middle)**2+(index[1]-middle)**2)) # Free space transfer for ap array
    Utfm = abs(np.fft.fftshift(np.fft.ifft2(Ta*H))) # Compute intensity at distance z

    # Fourier Integral Method
    apindex = np.nonzero(aperaturearray)
    U[index[0],index[1]] = aperaturearray[index[0],index[1]] * np.exp(1j*k*((index[0]-middle)**2+(index[1]-middle)**2)/(2*z))
    Ufim = abs(np.fft.fftshift(np.fft.fft2(U))/len(U))

    # Save image
    fim = Image.fromarray(np.uint8(Ufim))
    fim.save("PATH\Fim.jpg")
    ftfm = Image.fromarray(np.uint8(Utfm))
    ftfm.save("PATH\FTFM.jpg")
    print "that may have worked..."
    return

if __name__ == '__main__':
    Diffraction()
You'll need numpy, scipy, and PIL to work with this code.
When I run this, it goes through the code, but there is no data in the output images (everything is black). Now I have a real problem here, as I don't entirely understand the math I'm doing (this is for homework), and I don't have a firm grasp of Python.
U[index[0],index[1]] = aperaturearray[index[0],index[1]] * np.exp(1j*k*((index[0]-middle)**2+(index[1]-middle)**2)/(2*z))
Should that line work for performing elementwise calculations on my array?
Could you perhaps post a minimal, yet complete, example? One that we can copy/paste and run ourselves?
In the meantime, in the first two lines of your current example:
h = aperatureimage
h[:,:] = 0
you set both 'aperatureimage' and 'h' to 0. That's probably not what you intended. You might want to consider:
h = aperatureimage.copy()
This generates a copy of aperatureimage, while your code simply points h at the same array as aperatureimage, so changing one changes the other.
Be aware that copying very large arrays might cost you more memory than you would prefer.
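For illustration (not part of the original code), the difference is easy to see on a tiny array:

import numpy as np

a = np.ones((2, 2))
h = a            # h is just another name for the same array
h[:, :] = 0
print(a)         # [[0. 0.], [0. 0.]] -- a was wiped too

a = np.ones((2, 2))
h = a.copy()     # independent copy
h[:, :] = 0
print(a)         # [[1. 1.], [1. 1.]] -- a is untouched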
What I think you are trying to do is this:
import numpy as np
N = 2048
M = 64
a = np.zeros((N, N))
a[N/2-M:N/2+M,N/2-M:N/2+M]=1
x,y = np.meshgrid(np.linspace(0, 1, N), np.linspace(0, 1, N))
b = a.copy()
indices = np.where(a>0)
b[indices] = np.exp(x[indices]**2+y[indices]**2)
Or something similar. This, in any case, sets some values in 'b' based on the x/y coordinates where 'a' is bigger than 0. Try visualizing it with imshow. Good luck!
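To look at the result (a minimal example, assuming b from the snippet above):

import matplotlib.pyplot as plt

plt.imshow(b, origin='lower')  # the central square lights up where a > 0
plt.colorbar()
plt.show()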
Concerning the edit
You should normalize your output so it fits in an 8-bit integer. Currently, one of your arrays has a maximum value much larger than 255 and the other has a maximum much smaller. Try this instead:
fim = Image.fromarray(np.uint8(255*Ufim/np.amax(Ufim)))
fim.save("PATH\Fim.jpg")
ftfm = Image.fromarray(np.uint8(255*Utfm/np.amax(Utfm)))
ftfm.save("PATH\FTFM.jpg")
Also consider np.zeros_like() instead of copying and clearing H and U.
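For example (a small sketch, assuming aperaturearray from your script; the complex dtype matters because arrays copied from a uint8 image cannot hold the complex exponentials you assign into H and U):

H = np.zeros_like(aperaturearray, dtype=complex)
U = np.zeros_like(aperaturearray, dtype=complex)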
Finally, I personally very much like working with ipython when developing something like this. If you put the code from your Diffraction function at the top level of your script (in place of the 'if __name__ ...' block), you can access the variables directly from ipython. A quick command like np.amax(Utfm) would show you that there are indeed values != 0. imshow() is always nice for looking at matrices.
