I'm seeing some strange discrepancies between my declared camera position (scene.camera.pos) and the actual camera position. I can't believe this feature is simply broken, so am I missing something here?
Here's the code; the output is shown below.
GlowScript 3.1 VPython
cube = box(pos=vector(0, 0, 0), size=vector(1,1,1), color=color.red, texture=textures.rough)
scene.lights = [distant_light(direction=vector(0.4226, 0, -0.9063),color=color.gray(0.7)),distant_light(direction=vector(0.4226, 0, -0.9063),color=color.gray(0.7))]
scene.background = color.gray(0.8)
scene.camera.pos = vector(3,3,-3)
scene.camera.axis = cube.pos - scene.camera.pos
#scene.forward=cube.pos
#scene.camera.center=cube.pos
#scene.camera.fov = (pi/180)*10
#scene.camera.axis = vector(0, 0, 0)
#scene.up = vector(0,1,0)
while True:
    rate(0.5)
    scene.append_to_title(scene.camera.pos)
    #scene.camera.rotate(angle=0.05, axis=vec(0,0,1), origin=vec(0,10,0))
    #scene.capture("woah")
I think I see the problem. There is a conflict between your manipulating the camera and the default scene.autoscale = True. If you set scene.autoscale = False before manipulating the camera, I think you'll find that the program behaves as expected. At the very least, this implies that the camera documentation should point out this conflict.
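For instance, here is a minimal sketch of that fix applied to the code above; the only new line is the scene.autoscale one:
GlowScript 3.1 VPython
cube = box(pos=vector(0, 0, 0), size=vector(1,1,1), color=color.red)
scene.autoscale = False  # stop VPython from refitting the camera to the scene
scene.camera.pos = vector(3,3,-3)
scene.camera.axis = cube.pos - scene.camera.pos
while True:
    rate(0.5)
    scene.append_to_title(scene.camera.pos)  # should now stay at <3, 3, -3>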
Finally checked back in on this and it's working properly now.
I'm having a problem with my text and radial stimuli in PsychoPy. It seems like all of my text stims are shifted to the left. Despite the fact that the text ($8, 0%) is given the same x coordinate as the radial stimuli, it is consistently to the left of the radial stimuli, and I'm not sure why.
Lot_a_win = visual.RadialStim(win=win, units="pix", name='Lot', color=col_code, opacity=1,
                              angularCycles=0, radialCycles=0, radialPhase=0.5, colorSpace='rgb',
                              ori=-90.0, pos=(lot_pos, 0), size=(300, 300), visibleWedge=(0.0, shade))
Lot_a_lose = visual.RadialStim(win=win, name='rad2', color=col_code, opacity=0.5,
                               angularCycles=0, radialCycles=0, radialPhase=0.5, colorSpace='rgb',
                               ori=45.0, pos=(lot_pos, 0), size=(300, 300))
Lot_a_lose.draw()
Lot_a_win.draw()
SureMoney=visual.TextStim(win=win,text="$ %s"%(sure_m),pos=sure_pos,bold=True,units='pix')
SureMoney.draw()
Lot_per=visual.TextStim(win=win,text="%s %%"%(lot_p),pos=(lot_pos,-50),bold=True,units='pix')
Lot_Money=visual.TextStim(win=win,text="$ %s"%(lot_m),pos=(lot_pos,50),bold=True,units='pix')
Lot_per.draw()
Lot_Money.draw()
My guess is that you've got pyglet 1.4 installed and that its text positioning system has changed. PsychoPy will support the changed system as best we can, but in the meantime I'd suggest you downgrade pyglet to 1.3.x.
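If it helps, here is a quick way to confirm which pyglet is being picked up (pyglet.version is a documented attribute; the pip line in the comment is just one way to pin the older series):
import pyglet
print(pyglet.version)  # text positions may be shifted if this reports 1.4.x
# to downgrade, run from a shell, e.g.:
#   pip install "pyglet<1.4"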
import cv2 as cv
from skimage.feature import corner_harris, corner_peaks

I = cv.imread('test.png')
IB = I[:, :, 0]  # use the first channel only
coords = corner_peaks(corner_harris(IB))  # (row, col) of each detected corner
# 5x5 patch centred on the second corner
patch = IB[coords[1, 0] - 2:coords[1, 0] + 3, coords[1, 1] - 2:coords[1, 1] + 3]
print(patch)
If the interest point is not near the boundary then it works OK, but at the boundary it does not work properly (the patch comes out smaller, because the slice is clipped). I guess it requires padding at the boundaries (not sure).
Is there an easier method to do this, or a way to make it work at the boundaries too?
Edit: I added the following and it seems to work; I will update if some problem occurs. If this can be solved with some built-in function that would be good, so kindly tell me if it can be done using some function.
# reflect-pad the image by 2 pixels on every side, then shift the corner
# coordinates by the same amount so they index into the padded image
IBpadded = cv.copyMakeBorder(IB, 2, 2, 2, 2, cv.BORDER_REFLECT)
coords = coords + 2
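Regarding a built-in function: NumPy's np.pad can do the same reflection padding in one call. A minimal sketch under the same 5x5 patch assumption:
import numpy as np

pad = 2
IB_padded = np.pad(IB, pad, mode='reflect')
r, c = coords[1]  # corner position in the original image (before the +2 shift above)
# the +pad offset of the padded image and the -pad of the patch start cancel out
patch = IB_padded[r:r + 2 * pad + 1, c:c + 2 * pad + 1]
print(patch.shape)  # always (5, 5), even at the boundary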
I have a problem where I need to select faces that are next to one pre-selected face.
This may be done easily, but the problem is that when I get a neighbour face I also need to know which direction it lies in.
So far I am able to select faces which share an edge with the selected face, but I can't get the face that is, for example, to the left or right of the first selected face. I have tried multiple approaches but can't find a solution.
I tried with:
pickWalk - cmds.pickWalk() - the problem with this is that its behavior can't be predicted, since it walks the mesh from the camera's perspective.
polyInfo - cmds.polyInfo() - this is a very useful function and the closest to an answer. In this approach I extract the edges of a face and then see which faces neighbour it via edgeToFace(). This works well but doesn't solve my problem: when polyInfo returns the faces that share edges, it doesn't return them in a way that lets me know that, say, edgesList[0] is always the edge pointing left or right. Hence if I use this on different faces, the resulting face may be facing a different direction in each case.
The hard way, with many conversions from vertex to edge and then to face, etc. But it is still the same problem: I don't know which edge is the top or left one.
connectedFaces(), a method I call on the selected face; it returns the faces connected to it, but again it's the same problem: I don't know which face is facing which way.
To be clear, I'm not using a pre-selected list of faces and checking against it; I need to find the faces without knowing or storing their names anywhere. Does someone know a way that works on a selection of faces?
To make my question clearer, I made an image:
As you can see from the example, given the selected face I need to select one of the pointed-to faces, and it must be exactly the face I ask for. The other methods select all neighbouring faces, but I need a method where I can say "select right" and it will select the face to the right of the first selected one.
This is one solution that is fairly consistent under the rule that up/down/left/right is aligned with the mesh's transformation (local space), though it could be world space too.
The first thing I would do is build a face-relative coordinate system for every mesh face using the average face vertex position, the face normal, and the world-space Y axis of the mesh's transformation. This involves a little vector math, so I will use the API to make this easier. This first part builds a coordinate system for each face and stores it in lists for later querying. See below.
from maya import OpenMaya, cmds

meshTransform = 'polySphere'
meshShape = cmds.listRelatives(meshTransform, c=True)[0]
meshMatrix = cmds.xform(meshTransform, q=True, ws=True, matrix=True)
primaryUp = OpenMaya.MVector(*meshMatrix[4:7])
# have a secondary up vector for faces that are facing the same way as the original up
secondaryUp = OpenMaya.MVector(*meshMatrix[8:11])

sel = OpenMaya.MSelectionList()
sel.add(meshShape)
meshObj = OpenMaya.MObject()
sel.getDependNode(0, meshObj)

meshPolyIt = OpenMaya.MItMeshPolygon(meshObj)
faceNeighbors = []
faceCoordinates = []

while not meshPolyIt.isDone():
    normal = OpenMaya.MVector()
    meshPolyIt.getNormal(normal)
    # use the secondary up if the normal is facing the same direction as the object Y
    up = primaryUp if (1 - abs(primaryUp * normal)) > 0.001 else secondaryUp
    center = meshPolyIt.center()

    faceArray = OpenMaya.MIntArray()
    meshPolyIt.getConnectedFaces(faceArray)
    faceNeighbors.append([faceArray[i] for i in range(faceArray.length())])

    # build an orthogonal frame: x across, y up, z along the normal, centred on the face
    xAxis = up ^ normal
    yAxis = normal ^ xAxis
    matrixList = [xAxis.x, xAxis.y, xAxis.z, 0,
                  yAxis.x, yAxis.y, yAxis.z, 0,
                  normal.x, normal.y, normal.z, 0,
                  center.x, center.y, center.z, 1]
    faceMatrix = OpenMaya.MMatrix()
    OpenMaya.MScriptUtil.createMatrixFromList(matrixList, faceMatrix)
    faceCoordinates.append(faceMatrix)

    meshPolyIt.next()
These functions will look up and return the face that is next to the given one in a particular direction (X and Y) relative to that face. This uses a dot product to see which neighbour lies furthest in that particular direction. This should work with any number of faces, but it will only return the one face that is most in that direction.
def getUpFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(0, 1, 0))

def getDownFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(0, -1, 0))

def getRightFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(1, 0, 0))

def getLeftFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(-1, 0, 0))

def getDirectionalFace(faceIndex, axis):
    faceMatrix = faceCoordinates[faceIndex]
    closestDotProd = -1.0
    nextFace = -1
    for n in faceNeighbors[faceIndex]:
        # express the neighbour's frame in the query face's local space
        nMatrix = faceCoordinates[n] * faceMatrix.inverse()
        # the translation row is the neighbour's centre relative to this face
        nVector = OpenMaya.MVector(nMatrix(3, 0), nMatrix(3, 1), nMatrix(3, 2))
        dp = nVector * axis
        if dp > closestDotProd:
            closestDotProd = dp
            nextFace = n
    return nextFace
So you would call it like this:
getUpFace(123)
where the number is the index of the face whose "up" neighbour you want.
Give this a try and see if it satisfies your needs.
Another approach uses polyListComponentConversion:
import pprint
from maya import cmds

init_face = cmds.ls(sl=True)
# get the edges of the selected face
edges = cmds.polyListComponentConversion(init_face, ff=True, te=True)
# get the neighbouring faces (bo=True keeps only the border, excluding the original face)
faces = cmds.polyListComponentConversion(edges, fe=True, tf=True, bo=True)
# show the neighbouring faces
cmds.select(faces)
# print the face normal of each neighbouring face
pprint.pprint(cmds.polyInfo(faces, fn=True))
The easiest way of doing this is using Pymel's connectedFaces() on the MeshFace:
http://download.autodesk.com/us/maya/2011help/pymel/generated/classes/pymel.core.general/pymel.core.general.MeshFace.html
import pymel.core as pm
sel = pm.ls(sl=True)[0]
pm.select(sel.connectedFaces())
I am creating a screenshot module using only pure Python (ctypes), with no big libraries like win32, wx, or Qt. It has to manage multiple screens (which PIL and Pillow cannot).
Where I am stuck is the call to CreateDCFromHandle: ctypes.windll.gdi32 does not know this function. I looked at the pywin32 source code for inspiration, but to no avail. As said in a comment, this function does not exist in MSDN, so what changes should I make to take other screens into consideration?
This is the code which works for the primary monitor, but not for the others: source code.
It blocks at line 35. I have tried a lot of combinations, looking for answers here and on other websites, but nothing worked for me... It is just a screenshot!
Do you have any clues?
Thanks in advance :)
Edit: I found my mistake! This is the code that works:
from struct import calcsize, pack
from ctypes import c_buffer
import ctypes

SRCCOPY = 0x00CC0020  # raster-operation code for BitBlt
DIB_RGB_COLORS = 0    # literal RGB values in the DIB colour table

# width, height, left and top describe the monitor being captured
srcdc = ctypes.windll.user32.GetWindowDC(0)
memdc = ctypes.windll.gdi32.CreateCompatibleDC(srcdc)
bmp = ctypes.windll.gdi32.CreateCompatibleBitmap(srcdc, width, height)
ctypes.windll.gdi32.SelectObject(memdc, bmp)
ctypes.windll.gdi32.BitBlt(memdc, 0, 0, width, height, srcdc, left, top, SRCCOPY)
bmp_header = pack('LHHHH', calcsize('LHHHH'), width, height, 1, 24)
c_bmp_header = c_buffer(bmp_header)
# one buffer for the pixels, each row padded to a multiple of 4 bytes
c_bits = c_buffer(' ' * (height * ((width * 3 + 3) & -4)))
got_bits = ctypes.windll.gdi32.GetDIBits(memdc, bmp, 0, height,
                                         c_bits, c_bmp_header, DIB_RGB_COLORS)
# Here, got_bits should be equal to height to tell you all went well.
A French article with full explanations: Windows : capture d'écran
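As a side note, the captured bits can be dumped to a .bmp file with nothing but a file header in front of them. A minimal sketch (save_bmp is a hypothetical helper of mine, assuming the 12-byte 'LHHHH' core header packed above):
from struct import pack

def save_bmp(path, c_bmp_header, c_bits):
    with open(path, 'wb') as f:
        # BITMAPFILEHEADER: magic, total file size, two reserved words,
        # then the offset at which the pixel data starts
        pixel_offset = 14 + len(c_bmp_header.raw)
        f.write(pack('<2sIHHI', b'BM', pixel_offset + len(c_bits.raw),
                     0, 0, pixel_offset))
        f.write(c_bmp_header.raw)  # the core header packed earlier
        f.write(c_bits.raw)        # bottom-up BGR rows from GetDIBits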
This isn't a Windows API function. You will need a combination of EnumDisplayDevices and CreateDC. Be aware that you must append "A" or "W" to the names of these functions depending on whether you want to use ANSI strings or Unicode (wide-character) strings.
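For example, here is a minimal ctypes sketch of that combination; the DISPLAY_DEVICEW layout mirrors the definition in the Windows headers, and error handling is omitted:
import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32
gdi32 = ctypes.windll.gdi32

class DISPLAY_DEVICEW(ctypes.Structure):
    _fields_ = [('cb', wintypes.DWORD),
                ('DeviceName', wintypes.WCHAR * 32),
                ('DeviceString', wintypes.WCHAR * 128),
                ('StateFlags', wintypes.DWORD),
                ('DeviceID', wintypes.WCHAR * 128),
                ('DeviceKey', wintypes.WCHAR * 128)]

device = DISPLAY_DEVICEW()
device.cb = ctypes.sizeof(device)
i = 0
while user32.EnumDisplayDevicesW(None, i, ctypes.byref(device), 0):
    # device.DeviceName is e.g. \\.\DISPLAY1; a DC created
    # from it is specific to that monitor
    hdc = gdi32.CreateDCW(device.DeviceName, None, None, None)
    print(device.DeviceName)
    gdi32.DeleteDC(hdc)
    i += 1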
Looking at the source for pywin32, CreateDCFromHandle is a fabrication. It does not exist in the Windows API; it is simply a bridge converting a Windows API thing into a pywin32 thing.
Since you're using ctypes rather than pywin32, no conversion is necessary; see if you can skip that step:
hwin = user.GetDesktopWindow()
hwindc = user.GetWindowDC(monitor['hmon'])
memdc = gdi.CreateCompatibleDC(hwindc)
When you're trying to do some native-Windows API thing with ctypes in Python, I find it more helpful to look at existing C code which already uses the Windows API rather than using Python code that uses a wrapper around it.
I want to reserve some space on the screen for my GTK application written in Python. I've written this function:
import xcb, xcb.xproto
import struct

def reserve_space(xid, data):
    connection = xcb.connect()
    atom_cookie = connection.core.InternAtom(True, len("_NET_WM_STRUT_PARTIAL"),
                                             "_NET_WM_STRUT_PARTIAL")
    type_cookie = connection.core.InternAtom(True, len("CARDINAL"), "CARDINAL")
    atom = atom_cookie.reply().atom
    atom_type = type_cookie.reply().atom
    data_p = struct.pack("I I I I I I I I I I I I", *data)
    strat_cookie = connection.core.ChangeProperty(xcb.xproto.PropMode.Replace, xid,
                                                  atom, xcb.xproto.Atom.CARDINAL, 32,
                                                  len(data_p), data_p)
    connection.flush()
It is called like this:
utils.reserve_space(xid, [0, 60, 0, 0, 0, 0, 24, 767, 0, 0, 0, 0])
Unfortunately, it doesn't work. Where is the error in my code?
UPD:
Here is my xprop output. My WM is Compiz.
I have uploaded a gist that demonstrates how to specify a strut across the top of the current monitor for what might be a task-bar. It may help explain some of this.
The gist of my gist is below:
window = gtk.Window()
window.show_all()
topw = window.get_toplevel().window
topw.property_change("_NET_WM_STRUT", "CARDINAL", 32, gtk.gdk.PROP_MODE_REPLACE,
                     [0, 0, bar_size, 0])
topw.property_change("_NET_WM_STRUT_PARTIAL", "CARDINAL", 32, gtk.gdk.PROP_MODE_REPLACE,
                     [0, 0, bar_size, 0, 0, 0, 0, 0, x, x + width, 0, 0])
I found the strut arguments confusing at first, so here is an explanation that I hope is clearer:
We set _NET_WM_STRUT, the older mechanism, as well as _NET_WM_STRUT_PARTIAL, but window managers ignore the former if they support the latter. The numbers in the array are as follows:
0, 0, bar_size, 0 are the number of pixels to reserve along each edge of the screen given in the order left, right, top, bottom. Here the size of the bar is reserved at the top of the screen and the other edges are left alone.
_NET_WM_STRUT_PARTIAL also supplies a further four pairs, each being a start and end position for the strut (they don't need to occupy the entire edge).
In the example, we set the top start to the current monitor's x co-ordinate and the top-end to the same value plus that monitor's width. The net result is that space is reserved only on the current monitor.
Note that co-ordinates are specified relative to the screen (i.e. all monitors together).
(see the referenced gist for the full context)
Changing the code to use ChangePropertyChecked(), and then checking the result, gives a BadLength exception.
I think the bug here is that the ChangeProperty() parameter data_len is the number of elements of the size given by format, not the number of bytes, in the property data data.
Slightly modified code which works for me:
def reserve_space(xid, data):
    connection = xcb.connect()
    atom_cookie = connection.core.InternAtom(False, len("_NET_WM_STRUT_PARTIAL"),
                                             "_NET_WM_STRUT_PARTIAL")
    atom = atom_cookie.reply().atom
    data_p = struct.pack("12I", *data)
    # data_len is the number of 32-bit elements, not the number of bytes
    strat_cookie = connection.core.ChangePropertyChecked(xcb.xproto.PropMode.Replace, xid,
                                                         atom, xcb.xproto.Atom.CARDINAL, 32,
                                                         len(data_p) // 4, data_p)
    strat_cookie.check()
    connection.flush()