Is there a way to separate two polygon shells in the Maya API (OpenMaya)? Something like the cmds.polySeparate command, which I cannot use because it returns the separated nodes in random order, so I cannot know which one to delete and which one to keep in my script. Moreover, I'd like to rely only on the API and not mix it with cmds.
Reading the documentation I thought that
OpenMaya.MFnMesh.extractFaces was what I was looking for, but (differently from what the docs seem to say) it just cuts off the selected chunk and leaves it in the same node.
It seems there is no clean way to do this with the API.
Since I needed to separate the mesh only in order to delete the part I didn't need, I decided to keep track of the vertices and polygons I wanted to remove and create a new mesh without them.
As you can see in the function below, I just keep the "good" vertices and polygons and then update the vertex IDs in the polygon_connects list.
import maya.api.OpenMaya as om

def regenerate_mesh(source_mesh, vertices_to_delete, poly_to_delete):
    # Keep only the points that are not in the list of vertices to delete.
    points = source_mesh.getPoints(om.MSpace.kWorld)
    num_points = len(points)
    i = 0
    while i < num_points:
        p1 = points[i]
        for p2 in vertices_to_delete['points']:
            if p1.x == p2.x and p1.y == p2.y and p1.z == p2.z:
                points.remove(i)
                num_points -= 1
                break
        else:
            i += 1

    # Drop every polygon that is either listed in poly_to_delete or that
    # references one of the deleted vertices.
    polygon_counts, polygon_connects = source_mesh.getVertices()
    i = j = 0
    polygon_counts_length = len(polygon_counts)
    while i < polygon_counts_length:
        k = 0
        for poly in poly_to_delete:
            if poly == polygon_connects[j:j+polygon_counts[i]]:
                for l in range(polygon_counts[i]):
                    polygon_connects.remove(j)
                polygon_counts.remove(i)
                polygon_counts_length -= 1
                break
        else:
            while k < polygon_counts[i]:
                if polygon_connects[j+k] in vertices_to_delete['indices']:
                    for l in range(polygon_counts[i]):
                        polygon_connects.remove(j)
                    polygon_counts.remove(i)
                    polygon_counts_length -= 1
                    break
                k += 1
            else:
                j += k
                i += 1

    # Update the vertex IDs in polygon_connects to account for the removed points.
    for vertex in sorted(vertices_to_delete['indices'], reverse=True):
        for index, new_vertex in enumerate(polygon_connects):
            if new_vertex > vertex:
                polygon_connects[index] -= 1

    new_mesh = om.MFnMesh()
    new_mesh.create(points, polygon_counts, polygon_connects)
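For reference, here is a minimal usage sketch of the function above. The selection handling is only an illustration and not part of the original script, and verts_to_delete / polys_to_delete are assumed to have been collected beforehand by whatever logic identifies the unwanted shell:

import maya.api.OpenMaya as om

# Hypothetical driver: rebuild the first selected mesh without the unwanted parts.
sel = om.MGlobal.getActiveSelectionList()
dag_path = sel.getDagPath(0)
source_mesh = om.MFnMesh(dag_path)
regenerate_mesh(source_mesh, verts_to_delete, polys_to_delete)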
If someone finds a cleaner way, I'll be happy to hear about it and mark it as the solution!
I've been trying to implement a transition from one amount of spacing to another, which is similar to acceleration and deceleration, except I failed and the only thing I got out of it was this infinite stack of a mess; here is a screenshot showing this in action:
You can see a very black circle here, which is in reality something like 100 or 200 circles stacked on top of each other.
I reached this result using this piece of code:
def Place_circles(curve, circle_space, cs, draw=True, screen=None):
    curve_acceleration = []
    if type(curve) == tuple:
        curve_acceleration = curve[1][0]
        curve_intensity = curve[1][1]
        curve = curve[0]
    #print(curve_intensity)
    #print(curve_acceleration)
    Circle_list = []
    idx = [0, 0]
    for c in reversed(range(0, len(curve))):
        for p in reversed(range(0, len(curve[c]))):
            user_dist = circle_space[curve_intensity[c]] + curve_acceleration[c] * p
            dist = math.sqrt(math.pow(curve[c][p][0] - curve[idx[0]][idx[1]][0], 2) + math.pow(curve[c][p][1] - curve[idx[0]][idx[1]][1], 2))
            if dist > user_dist:
                idx = [c, p]
                Circle_list.append(circles.circles(round(curve[c][p][0]), round(curve[c][p][1]), cs, draw, screen))
This places circles depending on the intensity of the current curve (a random number between 0 and 2), which maps to an amount of spacing (let's say between 20 and 30 here, with 20 being index 0, 30 being index 2, and a number between those two being index 1).
This creates the stack you see above, which isn't what I want. I also came to the conclusion that I cannot use acceleration, since the amount of time to move between 2 points depends on the amount of circles I need to click on; knowing that there are multiple circles between each pair of points, but not being able to determine how many, leaves me unable to use the classic acceleration formula.
So I'm running out of options and ideas on how to transition from one amount of spacing to another.
Any ideas?
PS: I scrapped the idea above and switched back to my master branch, but the code for this is still available in the branch I created here: https://github.com/Mrcubix/Osu-StreamGenerator/tree/acceleration .
So now I'm back to my normal code, which doesn't have acceleration or deceleration.
TL;DR: I can't use acceleration since I don't know the amount of circles that are going to be placed between the 2 points, which makes the time of travel vary (I need, for example, to click circles at 180 BPM, i.e. one circle every 0.333 s), so I'm looking for another way to generate gradually changing spacing.
First, I took my function that was generating the intensity for each curve in [0 ; 2].
Then I scrapped the acceleration formula, as it's unusable.
Now I'm using a basic algorithm to determine the maximum amount of circles I can place on a curve.
The way my script works is the following:
I first generate a stream (multiple circles that need to be clicked at high BPM).
This way I obtain the length of each curve (or segment) of the polyline.
I generate an intensity for each curve using the following function:
import random

def generate_intensity(Circle_list: list = None, circle_space: int = None, Args: list = None):
    curve_intensity = []
    if not Args or Args[0] == "NewProfile":
        prompt = True
        while prompt:
            max_duration_intensity = input("Choose the maximum amount of curve the change in intensity will occur for: ")
            if max_duration_intensity.isdigit():
                max_duration_intensity = int(max_duration_intensity)
                prompt = False
        prompt = True
        while prompt:
            intensity_change_odds = input("Choose the odds of occurrence for changes in intensity (1-100): ")
            if intensity_change_odds.isdigit():
                intensity_change_odds = int(intensity_change_odds)
                if 0 < intensity_change_odds <= 100:
                    prompt = False
        prompt = True
        while prompt:
            min_intensity = input("Choose the lowest amount of spacing a circle will have: ")
            if min_intensity.isdigit():
                min_intensity = float(min_intensity)
                if min_intensity < circle_space:
                    prompt = False
        prompt = True
        while prompt:
            max_intensity = input("Choose the highest amount of spacing a circle will have: ")
            if max_intensity.isdigit():
                max_intensity = float(max_intensity)
                if max_intensity > circle_space:
                    prompt = False
        prompt = True
    if Args:
        if Args[0] == "NewProfile":
            return [max_duration_intensity, intensity_change_odds, min_intensity, max_intensity]
        elif Args[0] == "GenMap":
            max_duration_intensity = Args[1]
            intensity_change_odds = Args[2]
            min_intensity = Args[3]
            max_intensity = Args[4]
    circle_space = ([min_intensity, circle_space, max_intensity] if not Args else [Args[0][3], circle_space, Args[0][4]])
    count = 0
    for idx, i in enumerate(Circle_list):
        if idx == len(Circle_list) - 1:
            if random.randint(0, 100) < intensity_change_odds:
                if random.randint(0, 100) > 50:
                    curve_intensity.append(2)
                else:
                    curve_intensity.append(0)
            else:
                curve_intensity.append(1)
        if random.randint(0, 100) < intensity_change_odds:
            if random.randint(0, 100) > 50:
                curve_intensity.append(2)
                count += 1
            else:
                curve_intensity.append(0)
                count += 1
        else:
            if curve_intensity:
                if curve_intensity[-1] == 2 and not count + 1 > max_duration_intensity:
                    curve_intensity.append(2)
                    count += 1
                    continue
                elif curve_intensity[-1] == 0 and not count + 1 > max_duration_intensity:
                    curve_intensity.append(0)
                    count += 1
                    continue
                elif count + 1 > 2:
                    curve_intensity.append(1)
                    count = 0
                    continue
                else:
                    curve_intensity.append(1)
            else:
                curve_intensity.append(1)
    curve_intensity.reverse()
    if curve_intensity.count(curve_intensity[0]) == len(curve_intensity):
        print("Intensity didn't change")
        return circle_space[1]
    print("\n")
    return [circle_space, curve_intensity]
With this, I obtain 2 lists: one with the spacings I specified, and the second one with the randomly generated intensities.
From there I call another function, taking as arguments the polyline, the previously specified spacings and the generated intensities:
import math

import numpy as np

def acceleration_algorithm(polyline, circle_space, curve_intensity):
    new_circle_spacing = []
    for idx in range(len(polyline)):  # repeat 4 times
        spacing = []
        Length = 0
        best_spacing = 0
        for p_idx in range(len(polyline[idx]) - 1):  # repeat 1000 times / p_idx in [0 ; 1000]
            # Create multiple lists containing spacings going from circle_space[curve_intensity[idx-1]] to circle_space[curve_intensity[idx]]
            spacing.append(np.linspace(circle_space[curve_intensity[idx]], circle_space[curve_intensity[idx+1]], p_idx).tolist())
            # Sum distances to find the length of the curve
            Length += abs(math.sqrt((polyline[idx][p_idx+1][0] - polyline[idx][p_idx][0]) ** 2 + (polyline[idx][p_idx+1][1] - polyline[idx][p_idx][1]) ** 2))
        for s in range(len(spacing)):  # probably has 1000 lists in 1 list
            length_left = Length  # Make sure to reset length for each iteration
            for dist in spacing[s]:  # subtract the spacings in spacing[s]
                length_left -= dist
            if length_left > 0:
                best_spacing = s
            else:  # Since length < 0, use previous working index (best_spacing), could also just do `s-1`
                if spacing[best_spacing] == []:
                    new_circle_spacing.append([circle_space[1]])
                    continue
                new_circle_spacing.append(spacing[best_spacing])
                break
    return new_circle_spacing
With this, I obtain a list with the spacing between each circle that is going to be placed.
From there, I can call Place_circles() again and obtain the new stream:
def Place_circles(polyline, circle_space, cs, DoDrawCircle=True, surface=None):
    Circle_list = []
    curve = []
    next_circle_space = None
    dist = 0
    for c in reversed(range(0, len(polyline))):
        curve = []
        if type(circle_space) == list:
            iter_circle_space = iter(circle_space[c])
            next_circle_space = next(iter_circle_space, circle_space[c][-1])
        for p in reversed(range(len(polyline[c]) - 1)):
            dist += math.sqrt((polyline[c][p+1][0] - polyline[c][p][0]) ** 2 + (polyline[c][p+1][1] - polyline[c][p][1]) ** 2)
            if dist > (circle_space if type(circle_space) == int else next_circle_space):
                dist = 0
                curve.append(circles.circles(round(polyline[c][p][0]), round(polyline[c][p][1]), cs, DoDrawCircle, surface))
                if type(circle_space) == list:
                    next_circle_space = next(iter_circle_space, circle_space[c][-1])
        Circle_list.append(curve)
    return Circle_list
The result is a stream with varying space between circles (so accelerating or decelerating). The only issue left to fix is pygame not updating the screen with the new set of circles after I call Place_circles(), but that's an issue I'm either going to try to fix myself or ask about in another post.
The final code for this feature can be found on my repo: https://github.com/Mrcubix/Osu-StreamGenerator/tree/Acceleration_v02
I'm currently doing a project, and in the code I have I'm trying to get trees .*. and mountains .^. to spawn in groups around the first tree or mountain, which is spawned randomly. However, I can't figure out how to get the trees and mountains to spawn in groups around a single randomly generated point. Any help?
grid = []
def draw_board():
    row = 0
    for i in range(0, 625):
        if grid[i] == 1:
            print("..."),
        elif grid[i] == 2:
            print("..."),
        elif grid[i] == 3:
            print(".*."),
        elif grid[i] == 4:
            print(".^."),
        elif grid[i] == 5:
            print("[T]"),
        else:
            print("ERR"),
        row = row + 1
        if row == 25:
            print ("\n")
            row = 0
    return
There are a number of ways you can do it.
Firstly, you can just simulate the groups directly, i.e. pick a range on the grid and fill it with a specific figure.
import random

# `figures` is assumed to be defined elsewhere, e.g. figures = [3, 4] for trees and mountains.
def generate_grid(size):
    grid = [0] * size
    right = 0
    while right < size:
        left = right
        repeat = min(random.randint(1, 5), size - right)  # *
        right = left + repeat
        grid[left:right] = [random.choice(figures)] * repeat
    return grid
Note that the group size need not be uniformly distributed; you can use any convenient distribution, e.g. Poisson.
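For example, a quick sketch of the same generator with Poisson-distributed group lengths (this assumes numpy and clamps the draw to at least 1; figures is still the list of possible cell values, as above):

import random
import numpy as np

def generate_grid_poisson(size, lam=3):
    # Same idea as above, but group lengths are drawn from a Poisson distribution.
    grid = [0] * size
    right = 0
    while right < size:
        left = right
        repeat = min(max(1, np.random.poisson(lam)), size - right)
        right = left + repeat
        grid[left:right] = [random.choice(figures)] * repeat
    return grid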
Secondly, you can use a Markov Chain. In this case group lengths will implicitly follow a Geometric distribution. Here's the code:
import random

def transition_matrix(A):
    """Ensures that each row of transition matrix sums to 1."""
    copy = []
    for i, row in enumerate(A):
        total = sum(row)
        copy.append([item / total for item in row])
    return copy

def generate_grid(size):
    # Transition matrix ``A`` defines the probability of
    # changing from figure i to figure j for each pair
    # of figures i and j. The grouping effect can be
    # obtained by setting diagonal entries A[i][i] to
    # larger values.
    #
    # You need to specify this manually.
    A = transition_matrix([[5, 1],
                           [1, 5]])  # Assuming 2 figures.
    grid = [random.choice(figures)]
    for i in range(1, size):
        current = grid[-1]
        next = choice(figures, A[current])
        grid.append(next)
    return grid
Where the choice function is explained in this StackOverflow answer.
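The link did not survive here, but the idea is just a weighted random choice; a minimal sketch could look like the following (on Python 3.6+ you can simply use random.choices(figures, weights=A[current])[0] instead):

import random

def choice(options, probs):
    # Pick options[i] with probability probs[i]; probs is assumed to sum to 1.
    r = random.uniform(0, 1)
    cumulative = 0.0
    for option, p in zip(options, probs):
        cumulative += p
        if r <= cumulative:
            return option
    return options[-1]  # guard against floating-point rounding

One caveat, as far as I can tell: A[current] indexes the transition matrix by the previous figure's value, so this works as written only if the figures are the integers 0..len(figures)-1 (map them to your grid codes such as 3 and 4 afterwards).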
For a project I'm working on, I'm trying to write some code to detect collisions between non-point particles in a 2D space. My goal is to detect collisions for at least a few thousand particles a few times per time step, which I know is a tall order for Python. I've followed this blog post, which implements a quadtree to significantly reduce the number of pairwise checks I need to make. Where I believe I'm running into issues is this function:
def get_index(self, particle):
    index = -1
    bounds = particle.aabb

    v_midpoint = self.bounds.x + self.bounds.width/2
    h_midpoint = self.bounds.y + self.bounds.height/2

    top_quad = bounds.y < h_midpoint and bounds.y + bounds.height < h_midpoint
    bot_quad = bounds.y > h_midpoint

    if bounds.x < v_midpoint and bounds.x + bounds.width < v_midpoint:
        if top_quad:
            index = 1
        elif bot_quad:
            index = 2
    elif bounds.x > v_midpoint:
        if top_quad:
            index = 0
        elif bot_quad:
            index = 3

    return index
From my initial profiling, this function is the bottleneck, and I need it to be blisteringly fast because of its high call count. Originally I was just supplying an object's axis-aligned bounding box, which was working almost at the speed I needed, but then I realized I had no way of determining which particles may actually be colliding. So now I'm passing a list of particles to my quadtree constructor and just using the class attribute aabb to get my bounds.
Is there some way I could pass something analogous to an object pointer instead of the whole object? Additionally, are there other recommendations to optimize the code above?
Don't know if they'll help, but here are a few ideas:
v_midpoint and h_midpoint are re-calculated for every particle added to the quadtree. Instead, calculate them once when a Quad is initialized, then access them as attributes.
I don't think the and is needed in calculating top_quad: bounds.y + bounds.height < h_midpoint is sufficient. Same for the left-side check (bounds.x + bounds.width < v_midpoint).
Do the simpler checks first and only do the longer one if necessary: bounds.x > v_midpoint vs. bounds.x + bounds.width < v_midpoint.
bounds.x + bounds.width is calculated multiple times for most particles. Maybe bounds.left and bounds.right can be calculated once as attributes of each particle (see the sketch after the rewritten function below).
No need to calculate bot_quad if top_quad is True. Or vice versa.
Maybe like this:
def get_index(self, particle):
    bounds = particle.aabb
    # right
    if bounds.x > self.v_midpoint:
        # bottom
        if bounds.y > self.h_midpoint:
            return 3
        # top
        elif bounds.y + bounds.height < self.h_midpoint:
            return 0
    # left
    elif bounds.x + bounds.width < self.v_midpoint:
        # bottom
        if bounds.y > self.h_midpoint:
            return 2
        # top
        elif bounds.y + bounds.height < self.h_midpoint:
            return 1
    return -1
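And a rough sketch of suggestion 4, with assumed attribute names (right and bottom are not in the original code): compute the derived edges once when the bounding box is created, so get_index never repeats the additions:

class AABB(object):
    # Hypothetical bounding-box holder; the original particle.aabb class is not shown.
    __slots__ = ("x", "y", "width", "height", "right", "bottom")

    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height
        self.right = x + width      # computed once instead of in every get_index call
        self.bottom = y + height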
Hi, I was wondering if this is possible at all, since I tried it but my variable was always empty. In my project, I'm tracking a static object and a laser pointer via a PiCamera on my Raspberry Pi, and I calculate the centroids of their contours as (smallx, smally) and (small2x, small2y) respectively.
I use the difference between their coordinates to see if the pointer should go up, down, left, or right in order to meet the static object. After that, it'll choose a direction between 1 and 4 to move, because my direction controls aren't perfectly on an x-y axis and are slanted.
I left the controls and the contour finding out from here and shortened my total code so that you wouldn't be met with a giant pile of slop to sort through.
EDIT: I don't think that with my understanding I could provide something runnable without posting a couple hundred lines and my little device, but I'll boil it down and post the exact portion of my code where this is relevant. Running Python 2.7.3, using OpenCV 2.4.10.
Code:
#import libraries like picamera and opencv

#set empty variables like:
up = down = left = right = set()
smallx = smally = small2x = small2y = 0
#etc etc

with picamera.PiCamera() as camera:
    with picamera.array.PiRGBArray(camera) as rawCapture:
        #Calibrate my controls with the camera. Updates the up, down, left, and right sets.

with picamera.PiCamera() as camera:
    with picamera.array.PiRGBArray(camera) as rawCapture:
        # Take pictures, threshold them, find contours, append their arrays to list
        if len(cnts) > 0:  #If any objects were identified
            contm = sorted(smalList, key=lambda tup: tup[1])
            smallest = cnts[smalList[0][0]]  #**Take smallest object (my static object)**
            smallM = cv2.moments(smallest)
            smallx = int(smallM['m10']/smallM['m00'])  #**Calculate xcoord**
            smally = int(smallM['m01']/smallM['m00'])  #**Calculate ycoord**
            cv2.line(frame, (smallx,smally), (smallx,smally), 1, 8, 0)  #Draws centroid
            # print(len(cnts))
            if len(cnts) == 2:  #If only 2 objects were identified
                smallester = cnts[smalList[1][0]]  #** Take pointer object **
                small2 = cv2.moments(smallester)
                small2x = int(small2['m10']/small2['m00'])  #**Calculate xcoord**
                small2y = int(small2['m01']/small2['m00'])  #**Calculate ycoord**
                x = small2x - smallx
                y = small2y - smally
                print x  #These prints return a value
                print y
                if x < 0:  #Difference = Pointer - Object
                    s1 = right
                if x > 0:
                    s1 = left
                if y < 0:
                    s2 = down
                if y > 0:
                    s2 = up
                print s1, s2  #set([]), set([])
                print up, down, left, right  #set([1,2]), set([3,4]), set([1,4]), set([2,3])
                selecty = s1 & s2  #set([])
                #Tell the pointer where to go
Should I even be using sets?
Use s1 = s2 = set() instead of = 0.
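To illustrate, here is a small sketch of the set-based route using the calibration values printed in your question (the example differences are made up):

# up/down/left/right as printed in the question
up, down, left, right = {1, 2}, {3, 4}, {1, 4}, {2, 3}

s1 = s2 = set()   # instead of 0, so the & below is always defined
x, y = -5, 7      # example: pointer is to the left of and below the object

if x < 0:
    s1 = right
elif x > 0:
    s1 = left
if y < 0:
    s2 = down
elif y > 0:
    s2 = up

selecty = s1 & s2
print(selecty)    # {2}: the one slanted direction shared by "right" and "up"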
As for your second question, there are probably better and better-known ways to approach your problem. For example, using bit logic:
right = 1
up = 2
left = 4
down = 8

select = 0
if (small2x - smallx) < 0:
    select |= right
if (small2x - smallx) > 0:
    select |= left
if (small2y - smally) < 0:
    select |= down
if (small2y - smally) > 0:
    select |= up

print(select)
print("You chose %s%s%s%s" % ("UP " if select & up else "",
                              "DOWN " if select & down else "",
                              "LEFT " if select & left else "",
                              "RIGHT" if select & right else ""))
#Do things after
I've written some Python code to calculate a certain quantity from a cosmological simulation. It does this by checking whether a particle is contained within a box of size 8,000^3, starting at the origin and advancing the box when all particles contained within it are found. As I am counting ~2 million particles altogether, and the total size of the simulation volume is 150,000^3, this is taking a long time.
I'll post my code below; does anybody have any suggestions on how to improve it?
Thanks in advance.
from __future__ import division
import numpy as np

def check_range(pos, i, j, k):
    a = 0
    if i <= pos[2] < i+8000:
        if j <= pos[3] < j+8000:
            if k <= pos[4] < k+8000:
                a = 1
    return a

def sigma8(data):
    N = []
    to_do = data
    print 'Counting number of particles per cell...'
    for k in range(0, 150001, 8000):
        for j in range(0, 150001, 8000):
            for i in range(0, 150001, 8000):
                temp = []
                n = []
                for count in range(len(to_do)):
                    n.append(check_range(to_do[count], i, j, k))
                    to_do[count][1] = n[count]
                    if to_do[count][1] == 0:
                        temp.append(to_do[count])
                #Only particles that have not been found are
                # searched for again
                to_do = temp
                N.append(sum(n))
            print 'Next row'
        print 'Next slice, %i still to find' % len(to_do)
    print 'Calculating sigma8...'
    if not sum(N) == len(data):
        return 'Error!\nN measured = {0}, total N = {1}'.format(sum(N), len(data))
    else:
        return 'sigma8 = %.4f, variance = %.4f, mean = %.4f' % (np.sqrt(sum((N-np.mean(N))**2)/len(N))/np.mean(N), np.var(N), np.mean(N))
I'll try to post some code, but my general idea is the following: create a Particle class that knows about the box that it lives in, which is calculated in the __init__. Each box should have a unique name, which might be the coordinate of the bottom left corner (or whatever you use to locate your boxes).
Get a new instance of the Particle class for each particle, then use a Counter (from the collections module).
Particle class looks something like:
# static consts - outside so that every instance of Particle doesn't take them along
# for the ride...
MAX_X = 150000
X_STEP = 8000
# etc.

class Particle(object):
    def __init__(self, data):
        self.x = data[xvalue]
        self.y = data[yvalue]
        self.z = data[zvalue]
        self.compute_box_label()

    def compute_box_label(self):
        import math
        x_label = math.floor(self.x / X_STEP)
        y_label = math.floor(self.y / Y_STEP)
        z_label = math.floor(self.z / Z_STEP)
        self.box_label = str(x_label) + '-' + str(y_label) + '-' + str(z_label)
Anyway, I imagine your sigma8 function might look like:
def sigma8(data):
    import collections as col
    particles = [Particle(x) for x in data]
    boxes = col.Counter([x.box_label for x in particles])
    counts = boxes.most_common()
    #some other stuff
counts will be a list of tuples mapping each box label to the number of particles in that box. (Here we're treating particles as indistinguishable.)
Using list comprehensions is much faster than using explicit loops; I think the reason is that you're basically relying more on the underlying C, but I'm not the person to ask. Counter is (supposedly) highly optimized as well.
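As a rough, untested sketch of the "some other stuff" (my assumption of how the counts could feed the statistics computed in the original function), note that the Counter only contains occupied boxes, so the empty ones have to be padded in by hand before taking the mean:

import numpy as np

# 8000-wide cells per axis for the 150,000-wide volume; matches the original
# triple loop, where range(0, 150001, 8000) gives 19 values per axis.
cells_per_axis = len(range(0, 150001, 8000))
total_boxes = cells_per_axis ** 3

N = np.array(list(boxes.values()), dtype=float)
# Counter only knows about boxes that contain at least one particle; pad with
# zeros so the mean and variance match the original per-cell definition.
N = np.concatenate([N, np.zeros(total_boxes - len(N))])

sigma8 = N.std() / N.mean()   # same as sqrt(sum((N - mean)**2) / len(N)) / mean
print('sigma8 = %.4f, variance = %.4f, mean = %.4f' % (sigma8, N.var(), N.mean()))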
Note: None of this code has been tested, so you shouldn't try the cut-and-paste-and-hope-it-works method here.