I need to simulate a focus of expansion using PsychoPy's RDK functionality.
I have the following code so far; however, it only creates an RDK whose dots move in a single fixed direction.
from psychopy import visual, event, core

win = visual.Window([1000, 1000], rgb=(255, 255, 255), fullscr=False)
fixSpot = visual.GratingStim(win, tex=None, mask="gauss", size=(0.05, 0.05), color='black')
rdk = visual.DotStim(win, units='', nDots=1000, coherence=1.0,
                     fieldPos=(0, 0),
                     fieldSize=(1, 1),
                     fieldShape='sqr', dotSize=6.0,
                     dotLife=150, dir=0, speed=0.01,
                     rgb=None, color=(0, 0, 0),
                     colorSpace='rgb255', opacity=1.0,
                     contrast=1.0, depth=0, element=None,
                     signalDots='different',
                     noiseDots='direction', name='',
                     autoLog=True)

stop = False
while stop == False:
    fixSpot.draw()
    rdk.draw()
    win.flip()
    if event.getKeys("a"):
        win.close()
        stop = True
I need to create an RDK where the dots all move away from a specific position in the window, i.e. a focus of expansion.
I tried changing the parameters, but I can't mimic the desired behaviour.
I also looked through and searched the PsychoPy documentation, but there is no mention of a 'focus of expansion'.
Is there any way to do this using PsychoPy? If not, what is the best alternative?
There's a demo called starField in the PsychoPy Coder view. It has dots radiating from a single point at random speeds (traditional "simulations" of space travel have used this to suggest stars at different distances). You should be able to work out how to give all dots the same speed.
The demo uses ElementArrayStim rather than DotStim, because DotStim has its own methods to control dot motion and I don't think you want that here.
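A minimal sketch of that idea, using ElementArrayStim for a constant-speed focus of expansion (the field size, dot count, step size and respawn rule here are illustrative assumptions, not taken from the starField demo itself):

import numpy as np
from psychopy import visual, event

win = visual.Window([600, 600], color=(1, 1, 1), units='height')
nDots = 200
speed = 0.005                      # distance per frame, the same for every dot
focus = np.array([0.0, 0.0])       # focus of expansion

xys = np.random.rand(nDots, 2) - 0.5   # start positions in a 1 x 1 field
dots = visual.ElementArrayStim(win, nElements=nDots, xys=xys, sizes=0.01,
                               elementTex=None, elementMask='circle',
                               colors=(-1, -1, -1))  # black in the default rgb colorSpace

while not event.getKeys(['escape']):
    vec = xys - focus                                 # vector from the focus to each dot
    norm = np.linalg.norm(vec, axis=1, keepdims=True)
    norm[norm == 0] = 1e-6                            # avoid division by zero at the focus
    xys += speed * vec / norm                         # step every dot outward by the same amount
    out = np.abs(xys).max(axis=1) > 0.5               # respawn dots that leave the field
    xys[out] = focus + (np.random.rand(out.sum(), 2) - 0.5) * 0.02
    dots.xys = xys
    dots.draw()
    win.flip()
win.close()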
Fun question. A way to do it is:
from psychopy import visual

win = visual.Window()
stim = visual.DotStim(win, nDots=50, dotLife=60, speed=0)  # a non-moving DotStim
for frame in range(100):
    stim._dotsXY *= 1.02  # accelerating X-Y expansion
    # stim._dotsXY *= [1.02, 1.05]  # faster acceleration in the y-direction
    stim.draw()
    win.flip()
This goes "behind the scenes" and manipulates the internal attribute called visual.DotStim._dotsXY. It is just a 2 x nDots numpy array like this:
print stim._dotsXY # look at coordinates
[[ 0.02306344 -0.33223609]
[ 0.30596334 -0.0300994 ]
[-0.10165172 -0.08354835]
[ 0.21854653 -0.07456332]
[-0.39262477 -0.21594382]
...etc
... on which you do all sorts of operations. I can't quite figure out how to do constant-speed expansion in a neat way.
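For what it's worth, constant-speed expansion falls out if you normalise each dot's offset from the focus before stepping, so every dot moves the same distance per frame. A rough sketch along the same lines (the step size and frame count are illustrative):

import numpy as np
from psychopy import visual

win = visual.Window()
stim = visual.DotStim(win, nDots=50, dotLife=60, speed=0)  # non-moving dots, as above
step = 0.005                                     # constant distance per frame
for frame in range(200):
    offsets = stim._dotsXY.copy()                # dot positions relative to the (0, 0) focus
    norms = np.linalg.norm(offsets, axis=1, keepdims=True)
    norms[norms == 0] = 1e-6                     # avoid division by zero at the focus
    stim._dotsXY += step * offsets / norms       # every dot moves the same distance outward
    stim.draw()
    win.flip()
win.close()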
I am running into a few issues using the GRASS GIS module r.accumulate while running it from Python. I use the module to calculate subwatersheds for over 7000 measurement points. Unfortunately, the output of the algorithm is nested, so all subwatersheds overlap each other. Running the r.accumulate subwatershed module takes roughly 2 minutes for either one or multiple points; I assume the bottleneck is loading the direction raster.
I was wondering whether an unnested variant is available in GRASS GIS and, if not, how to overcome the bottleneck of loading the direction raster every time the module is called. Below is a code snippet of what I have tried so far (which results in the nested variant):
import grass.script as gs
from grass.pygrass.vector import VectorTopo

locations = VectorTopo('locations', mapset='PERMANENT')
locations.open('r')
points = []
for i in range(len(locations)):
    points.append(locations.read(i + 1).coords())

for j in range(0, len(points), 255):
    output = "watershed_batch_{}@Watersheds".format(j)
    gs.run_command("r.accumulate", direction='direction@PERMANENT', subwatershed=output,
                   overwrite=True, flags="r", coordinates=points[j:j+255])
    gs.run_command('r.stats', flags="ac", input=output,
                   output="stat_batch_{}.csv".format(j), overwrite=True)
Any thoughts or ideas are very welcome.
I already replied to your email, but now that I see your Python code I better understand your "overlapping" issue. In this case, you don't want to feed individual outlet points one at a time. You can just run
r.accumulate direction=direction@PERMANENT subwatershed=output outlet=locations
r.accumulate's outlet option can handle multiple outlets and will generate non-overlapping subwatersheds.
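In Python that single call might look roughly like this (a sketch using grass.script and the map names from the question; the output name is just illustrative):

import grass.script as gs

# one call with the outlet option: r.accumulate reads every outlet point from the
# vector map and produces non-overlapping subwatersheds
gs.run_command('r.accumulate',
               direction='direction@PERMANENT',
               subwatershed='subwatersheds_all',
               outlet='locations',
               overwrite=True)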
The answer provided via email was very useful. To share it, I have provided the code below to do an unnested subwatershed calculation. A small remark: I had to feed the coordinates in batches because the list of coordinates exceeded the maximum command-line length Windows can handle.
Thanks to @Huidae Cho, the call to r.accumulate to calculate subwatersheds and longest flow paths can now be done in one call instead of two separate calls.
The output is unnested basins, where the larger subwatersheds are separated from the smaller subbasins instead of being split up into them. This has to do with the fact that the output is in raster format, where each cell can only represent one basin.
import grass.script as gs

gs.run_command('g.mapset', mapset='Watersheds')
gs.run_command('g.region', rast='direction@PERMANENT')
StationIds = list(gs.vector.vector_db_select('locations_snapped_new', columns='StationId')["values"].values())
XY = list(gs.vector.vector_db_select('locations_snapped_new', columns='x_real,y_real')["values"].values())

for j in range(0, len(XY), 255):
    output_ws = "watershed_batch_{}@Watersheds".format(j)
    output_lfp = "lfp_batch_{}@Watersheds".format(j)
    output_lfp_unique = "lfp_unique_batch_{}@Watersheds".format(j)
    gs.run_command("r.accumulate", direction='direction@PERMANENT', subwatershed=output_ws,
                   flags="ar", coordinates=XY[j:j+255], lfp=output_lfp,
                   id=StationIds[j:j+255], id_column="id", overwrite=True)
    gs.run_command("r.to.vect", input=output_ws, output=output_ws, type="area", overwrite=True)
    gs.run_command("v.extract", input=output_lfp, where="1 order by id", output=output_lfp_unique, overwrite=True)
To export the unique watersheds I used the following code. I had to convert the longest flow paths to points, because some of them intersected the corner boundary of the neighbouring watershed and were therefore not fully within their own subwatershed. See the image below, where the red line (longest flow path) touches the subwatershed boundary:
[image: longest flow path (red line) touching the subwatershed boundary]
import numpy as np
import grass.script as gs

gs.run_command('g.mapset', mapset='Watersheds')
lfps = gs.list_grouped('vect', pattern='lfp_unique_*')['Watersheds']
ws = gs.list_grouped('vect', pattern='watershed_batch*')['Watersheds']
files = np.stack((lfps, ws)).T
# print(files)

for file in files:
    print(file)
    ids = list(gs.vector.vector_db_select(file[0], columns="id")["values"].values())
    for idx in ids:
        idx = int(idx[0])
        expr = f'id="{idx}"'
        gs.run_command('v.extract', input=file[0], where=expr, output="tmp_lfp", overwrite=True)
        gs.run_command("v.to.points", input="tmp_lfp", output="tmp_lfp_points", use="vertex", overwrite=True)
        gs.run_command('v.select', ainput=file[1], binput="tmp_lfp_points", output="tmp_subwatersheds", overwrite=True)
        gs.run_command('v.db.update', map="tmp_subwatersheds", col="value", value=idx)
        gs.run_command('g.mapset', mapset='vector_out')
        gs.run_command('v.dissolve', input="tmp_subwatersheds@Watersheds",
                       output="subwatersheds_{}".format(idx), col="value", overwrite=True)
        gs.run_command('g.mapset', mapset='Watersheds')
        gs.run_command("g.remove", flags="f", type="vector", name="tmp_lfp,tmp_subwatersheds")
I ended up with a vector for each subwatershed.
I've been stuck figuring out how to split the environment into foreground and background. I've been thinking of getting the value of "distance from camera" (through Display > Heads Up Display > Object Details) so I can use the character's distance from the camera as a guide for the split.
The problem is that I don't know how to get its value in Python, so can someone help me please?
I'm using Maya 2016.
I get "None" from this command:
import maya.cmds as cmds
print cmds.headsUpDisplay('HUDObjDetDistFromCam', q=1)
Rather than trying to hijack object distance from camera, you can just calculate it yourself.
import math
import maya.cmds as cmds

def distance_to_camera(obj, cam):
    cam_pos = cmds.xform(cam, t=True, ws=True, q=True)
    object_pos = cmds.xform(obj, t=True, ws=True, q=True)
    raw_dist = [a - b for a, b in zip(cam_pos, object_pos)]
    return math.sqrt(sum([a ** 2 for a in raw_dist]))

distance_to_camera('pCube1', 'persp')
raw_dist = [a-b for a, b in zip(cam_pos, object_pos)] takes two lists of 3 numbers (the positions) and subtracts each item in one list from its counterpart in the other.
math.sqrt(sum([a**2 for a in raw_dist])) is the square root of the sum of the squares of the three numbers in raw_dist -- that is, the distance. You can also do this using the Maya API, but this version doesn't require any extra imports besides math.
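If you do want to go through the Maya API at some point, a sketch of the same calculation with the Python API 2.0 (maya.api.OpenMaya) could look like this; it does exactly what distance_to_camera above does, just with MVector doing the arithmetic:

import maya.cmds as cmds
from maya.api import OpenMaya as om

def distance_to_camera_api(obj, cam):
    # same world-space translations as above, wrapped in MVector
    cam_pos = om.MVector(cmds.xform(cam, t=True, ws=True, q=True))
    obj_pos = om.MVector(cmds.xform(obj, t=True, ws=True, q=True))
    return (cam_pos - obj_pos).length()

distance_to_camera_api('pCube1', 'persp')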
I'd like to automate the testing of my PsychoPy Builder experiment to cover a mix of correct/incorrect responses.
I can't find anything in the Help covering this area.
Does anyone have any suggestions?
There is built-in but not well documented support for keyboard-based testing like this (but not mouse): class psychopy.hardware.emulator.ResponseEmulator(threading.Thread)
See http://www.psychopy.org/api/hardware/emulator.html and scroll down to ResponseEmulator. It is used in PsychoPy's internal testing, and is not just for the fMRI simulator. Maybe it needs more visibility!
I think it would go something like this:
from psychopy.hardware.emulator import ResponseEmulator
simulated_responses = [(2.3, 'a'), (7.5, 'b')]
responder = ResponseEmulator(simulated_responses)
responder.start()
and you'd get an 'a' key press 2.3 sec after the .start(), then a 'b' 7.5 sec after the .start(), just as if a person had pressed those keys at those times (maybe not frame-accurate, but very close).
For the record, with a bit of surfing + experimenting I came up with the following that fitted the bill for me.
1/ add a code block to import the following libraries:
import win32api
import win32con
import time
Then define the keycodes for the input you're looking for e.g.:
VK_CODE = {
'enter':0x0D,
'esc':0x1B,
'spacebar':0x20,
'pageup':0x21,
'pagedown':0x22,
'end':0x23,
'home':0x24,
'left':0x25,
'up':0x26,
'right':0x27,
'down':0x28,
'0':0x30,
'1':0x31,
'2':0x32,
'3':0x33,
'4':0x34,
'5':0x35,
'6':0x36,
'7':0x37,
'8':0x38,
'9':0x39,
'a':0x41,
'b':0x42,
'c':0x43,
'd':0x44,
'e':0x45,
'f':0x46,
'g':0x47,
'h':0x48,
'i':0x49,
'j':0x4A,
'k':0x4B,
'l':0x4C,
'm':0x4D,
'n':0x4E,
'o':0x4F,
'p':0x50,
'q':0x51,
'r':0x52,
's':0x53,
't':0x54,
'u':0x55,
'v':0x56,
'w':0x57,
'x':0x58,
'y':0x59,
'z':0x5A,
'numpad_0':0x60,
'numpad_1':0x61,
'numpad_2':0x62,
'numpad_3':0x63,
'numpad_4':0x64,
'numpad_5':0x65,
'numpad_6':0x66,
'numpad_7':0x67,
'numpad_8':0x68,
'numpad_9':0x69,
'multiply':0x6A,
'add':0x6B,
'separator':0x6C,
'subtract':0x6D,
'decimal':0x6E,
'divide':0x6F,
'f1':0x70,
'f2':0x71,
'f3':0x72,
'f4':0x73,
'f5':0x74,
'f6':0x75,
'f7':0x76,
'f8':0x77,
'f9':0x78,
'f10':0x79,
'f11':0x7A,
'f12':0x7B
}
then, in a code block somewhere within your trial loop, on the 'Begin Routine' tab add:
frame_counter = 0
and on the 'each frame' tab add this
frame_counter += 1
# usually at 60 frames per second, so below we wait for ~1 second
# 'autoResp' below is the column name in your conditions file holding the
# desired AUTOMATIC keyboard responses; you can change this to whatever you want
#
# *IMPORTANT* Below, replace 'thisTrial' with the name you gave to your trial loop
if frame_counter > 60:
    this_resp = VK_CODE[thisTrial['autoResp']]
    win32api.keybd_event(this_resp, 0, 0, 0)
    time.sleep(.05)  # wait a while before doing the key up ...
    win32api.keybd_event(this_resp, 0, win32con.KEYEVENTF_KEYUP, 0)
    frame_counter = 0
See the code comments in the snippet above.
This then pulls in 'automated' key presses from your csv file (the column named 'autoResp' in this instance). NB: you can use this to test both correct and incorrect scenarios.
I am trying this code and it works well; however, it is really slow because the number of iterations is high.
I am thinking about threads, which should increase the performance of this script, right? Well, the question is how I can change this code to work with synchronized threads.
def get_duplicated(self):
    db_pais_origuem = self.country_assoc(int(self.Pais_origem))
    db_pais_destino = self.country_assoc(int(self.Pais_destino))
    condicao = self.condition_assoc(int(self.Condicoes))
    origem = db_pais_origuem.query("xxx")
    destino = db_pais_destino.query("xxx")
    origem_result = origem.getresult()
    destino_result = destino.getresult()
    for i in origem_result:
        for a in destino_result:
            text1 = i[2]
            text2 = a[2]
            vector1 = self.text_to_vector(text1)
            vector2 = self.text_to_vector(text2)
            cosine = self.get_cosine(vector1, vector2)
origem_result and destino_result structure:
[(382360, 'name abcd', 'some data'), (361052, 'name abcd', 'some data'), (361088, 'name abcd', 'some data')]
From what I can see, you are computing a distance function between pairs of vectors. Given a list of vectors v1, ..., vn and a second list w1, ..., wn, you want the distance/similarity between all pairs from v and w. This is usually highly amenable to parallel computation, and is sometimes referred to as an embarrassingly parallel computation. IPython works very well for this.
If your distance function distance(a,b) is independent and does not depend on results from other distance function values (this is usually the case that I have seen), then you can easily use the IPython parallel computing toolbox. I would recommend it over threads, queues, etc. for a wide variety of tasks, especially exploratory ones. However, the same principles can be extended to threads or the queue module in Python.
I recommend following along with http://ipython.org/ipython-doc/stable/parallel/parallel_intro.html#parallel-overview and http://ipython.org/ipython-doc/stable/parallel/parallel_task.html#quick-and-easy-parallelism. Together they provide a very easy, gentle introduction to parallelization.
In the simple case, you will simply use the threads on your computer (or a network if you want a bigger speed-up), and let each thread compute as many of the distance(a,b) calls as it can.
Assuming a command prompt that can see the ipcluster executable, type
ipcluster start -n 3
This starts the cluster. You will want to adjust the number of cores/threads depending on your specific circumstances. Consider using n-1 cores, to allow one core to handle the scheduling.
The hello world example goes as follows:
serial_result = map(lambda z:z**10, range(32))
from IPython.parallel import Client
rc = Client()
rc
rc.ids
dview = rc[:] # use all engines
parallel_result = dview.map_sync(lambda z: z**10, range(32))
# a couple of caveats: this template will not work directly
#for our use case of computing distance between a matrix (observations x variables)
#because the allV data matrix and the distance function are not visible to the nodes
serial_result == parallel_result
For the sake of simplicity I will show how to compute the distance between all pairs of vectors specified in allV. Assume that each row represents a data point (observation) that has three dimensions.
Also, I am not going to present this in the "pedagogically correct" way, but in the way that I stumbled through it, wrestling with the visibility of my functions and data on the remote nodes. I found that to be the biggest hurdle to entry.
import itertools
import numpy
from numpy import arange

dataPoints = 10
allV = numpy.random.rand(dataPoints, 3)
mesh = list(itertools.product(arange(dataPoints), arange(dataPoints)))

# given the following distance function we can evaluate locally
def DisALocal(a, b):
    return numpy.linalg.norm(a - b)

serial_result = map(lambda z: DisALocal(allV[z[0]], allV[z[1]]), mesh)

parallel_result = dview.map_sync(lambda z: DisALocal(allV[z[0]], allV[z[1]]), mesh)
# will not work as DisALocal is not visible to the nodes
# also will not work as allV is not visible to the nodes
There are a few ways to define remote functions, depending on whether we want to send our data matrix to the nodes or not. There are tradeoffs in how big the matrix is and whether you want to send lots of vectors individually to the nodes or send the entire matrix upfront...
# in the first case we send the function def to the nodes via the %autopx magic
%autopx
def DisARemote(a, b):
    import numpy
    return numpy.linalg.norm(a - b)
%autopx

# it requires us to push allV; also note the import numpy inside the function
dview.push(dict(allV=allV))

parallel_result = dview.map_sync(lambda z: DisARemote(allV[z[0]], allV[z[1]]), mesh)

serial_result == parallel_result
# here we will generate the vectors to compute differences between
# and pass the vectors only, so we do not need to load allV across the
# nodes. We must pre-compute the vectors, but this could, perhaps, be
# done more cleverly
from numpy import array

z1, z2 = zip(*mesh)
z1 = array(z1)
z2 = array(z2)
allVectorsA = allV[z1]
allVectorsB = allV[z2]

@dview.parallel(block=True)
def DisB(a, b):
    return numpy.linalg.norm(a - b)

parallel_result = DisB.map(allVectorsA, allVectorsB)

serial_result == parallel_result
In the final case we will do the following
# this relies on the allV data matrix being pre-loaded on the nodes.
# note that with DisC we do not import numpy in the function, but
# import it via the sync_imports command
with dview.sync_imports():
    import numpy

@dview.parallel(block=True)
def DisC(a):
    return numpy.linalg.norm(allV[a[0]] - allV[a[1]])

# the data structure must be pushed to all engines
dview.push(dict(allV=allV))

parallel_result = DisC.map(mesh)

serial_result == parallel_result
All of the above can easily be extended to work in a load-balanced fashion.
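For example, a minimal sketch of a load-balanced version, reusing rc, mesh, DisARemote and the allV that was pushed to the engines above:

# a load-balanced view hands each task to whichever engine is free,
# which helps when individual computations take uneven amounts of time
lview = rc.load_balanced_view()
parallel_result = lview.map_sync(lambda z: DisARemote(allV[z[0]], allV[z[1]]), mesh)

serial_result == parallel_result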
Of course, the easiest speedup (assuming distance(a,b) == distance(b,a)) would be the following. It only cuts the run time in half, but it can be combined with the parallelization ideas above to compute only the upper triangle of the distance matrix.
for vIndex, currentV in enumerate(v):
    for wIndex, currentW in enumerate(w):
        if vIndex > wIndex:
            continue  # we can skip the other half of the computations
        distance[vIndex, wIndex] = get_cosine(currentV, currentW)
        # if distance(a,b) == distance(b,a) then use this trick
        distance[wIndex, vIndex] = distance[vIndex, wIndex]
In order to understand Blender Python game scripting, I am currently trying to build a scene in which one can walk around a sphere, using the FPSController structure from this link. For gravity and FPSController orientation I tried to construct a Python controller, which currently looks like this:
import bge
from mathutils import Vector

def main():
    print("Started")
    controller = bge.logic.getCurrentController()
    me = controller.owner
    distance, loc, glob = me.getVectTo((0, 0, 0))
    grav = controller.actuators['Gravity']
    strength = me['Gravity']
    force = strength * (distance * distance) * glob
    grav.force = force
    try:
        rot = Vector((0, 0, -1)).rotation_difference(glob).to_matrix()
    except Exception as E:
        print(E)
        rot = (0, 0, 0)
    rotZ = me.orientation
    me.orientation = rot * rotZ
    controller.activate(grav)

main()
This roughly works until any angle goes over 180 degrees, at which point it looks discontinuous. I assume this comes from rotation_difference being discontinuous (the Blender documentation on Math Types & Utilities does not say anything about it, and I have not thought enough about quaternion representations yet to see whether a continuous map would be possible). I guess there is a more elegant way to make the local Z orientation depend continuously on the mouse while the local X and Y orientations depend continuously on some given vector, but how?
The consensus seems to be that you should accomplish such rotations using quaternions.
See this for the api: http://www.blender.org/documentation/249PythonDoc/Mathutils.Quaternion-class.html
See this for an introduction to the maths: http://en.wikipedia.org/wiki/Rotation_formalisms_in_three_dimensions#Quaternions
There is an align function. If the game object is called own, it should be something like own.alignAxisToVect(vector, 2, 1), with 2 being the index for the Z-axis (x=0, y=1, z=2) and 1 being the speed of alignment (between 0 and 1).
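Applied to the controller from the question, that could replace the rotation_difference block; a sketch (assuming glob is the vector toward the sphere centre returned by getVectTo, as in the question's code):

# keep the object's local +Z pointing away from the sphere centre;
# glob points from the object toward (0, 0, 0), so align +Z with its negation
me.alignAxisToVect(-glob, 2, 1.0)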