I am a beginner with NAO programming and I am now working on a project involving arm motion.
I must program a game in which NAO first stands up and then points at one of three differently coloured squares displayed on the ground.
I think I can "simply" make NAO move its arm so that it points towards one of three pre-defined coordinates.
However, the animation mode and the motion widget do not seem usable for parameterised movements, such as pointing at one of the three coordinates.
How do I perform such a move?
Have you looked at the ALMotion.setPositions type of method?
These methods work in Cartesian space: you simply place an end effector (e.g. the hand) at a specific position relative to an origin such as the chest.
You can think of it as a vector pointing in a direction...
The solver used for that could be enhanced, but it's a nice way to achieve what you need to do.
More info there:
http://doc.aldebaran.com/2-1/naoqi/motion/control-cartesian-api.html#ALMotionProxy::setPositions__AL::ALValueCR.AL::ALValueCR.AL::ALValueCR.floatCR.AL::ALValueCR
You could take a look at the pointAt method, which takes as parameters the position you would like to point at. If the positions of your three objects are known in advance, that will do the job. You can find more here:
http://doc.aldebaran.com/2-1/naoqi/trackers/altracker-api.html#ALTrackerProxy::pointAt__ssCR.std::vector:float:CR.iCR.floatCR
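For example, here is a minimal sketch using the NAOqi Python SDK, assuming the three squares' positions are known in advance and expressed in FRAME_TORSO (the coordinates and robot address below are made up):

from naoqi import ALProxy

NAO_IP = "nao.local"  # placeholder address
tracker = ALProxy("ALTracker", NAO_IP, 9559)
posture = ALProxy("ALRobotPosture", NAO_IP, 9559)

# Hypothetical coordinates (in metres) of the three squares in FRAME_TORSO
# (x forward, y left, z up).
squares = {
    "red":   [0.4,  0.3, 0.0],
    "green": [0.5,  0.0, 0.0],
    "blue":  [0.4, -0.3, 0.0],
}

posture.goToPosture("StandInit", 0.8)
frame = 0    # 0 = FRAME_TORSO
speed = 0.3  # fraction of maximum speed
tracker.pointAt("RArm", squares["green"], frame, speed)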
I recently worked on code that displays a simulation of particle motion in a periodic space. Concretely, it produces a 2D plot of N points (N ~ 10^4) initially gathered at the center, which then spread out according to their assigned velocities. Since the space is periodic, any point that goes past the upper limit is brought back in at the lower limit, and vice versa. To illustrate, here are two images:
Initial positions
After a certain time
Each point travels horizontally, either to the right or to the left (positive or negative velocity, respectively).
I programmed it in Python, but now, in the scope of my project, I'd like to simulate the same thing on a torus. To give you an idea of what this looks like, please take a look at the following picture:
Transformation from a rectangle to a torus
(Imagine my initial 2D plane is the initial rectangle, which I'd like to transform into the final torus.)
In that case we would see every particle moving on the surface of the torus. The first picture above would correspond to the particles gathered on a single circle of the torus, and the second picture would correspond to filling up the entire surface of the torus.
Since my code for the previous simulations was written in Python, I am wondering whether I can still use it for this task. If so, I'd like some clues about how to do it; otherwise, what would be the best language to use for this?
I hope I have been clear. I apologize in advance for any mistakes I may have made in English.
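For reference, a minimal sketch (with made-up major and minor radii R and r) of the standard torus embedding, which maps the periodic (x, y) positions that the existing Python code already produces onto 3D points:

import numpy as np

def torus_embed(x, y, Lx, Ly, R=3.0, r=1.0):
    # Map periodic coordinates in [0, Lx) x [0, Ly) onto torus angles...
    theta = 2 * np.pi * x / Lx   # angle around the central hole
    phi = 2 * np.pi * y / Ly     # angle around the tube
    # ...and then onto points on the torus surface in 3D.
    X = (R + r * np.cos(phi)) * np.cos(theta)
    Y = (R + r * np.cos(phi)) * np.sin(theta)
    Z = r * np.sin(phi)
    return X, Y, Z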
I would like to implement a Maya plugin (this question is independent from Maya) to create 3D Voronoi patterns, something like this:
I just know that I have to start from point sampling (I implemented the adaptive Poisson sampling algorithm described in this paper).
I thought that, from those points, I should create the 3D wireframe of the mesh by applying Voronoi, but the result was different from what I expected.
Here are a few examples of what I get when handling the result from scipy.spatial.Voronoi like this (as suggested here):
from scipy.spatial import Voronoi

vor = Voronoi(points)
for vpair in vor.ridge_vertices:
    # Skip ridges that extend to infinity (marked by a -1 vertex index).
    if all(x >= 0 for x in vpair):
        for i in range(len(vpair) - 1):
            v0 = vor.vertices[vpair[i]]
            v1 = vor.vertices[vpair[i + 1]]
            create_line(v0.tolist(), v1.tolist())
The grey vertices are the sampled points (the original shape was a simple sphere):
Here is a more complex shape (an arm):
Am I missing something? Can anyone suggest the proper pipeline and algorithms I need to implement to create such patterns?
I saw your question when you posted it but didn't have a real answer for you; since I see you still haven't received any responses, I'll at least write down some ideas. Unfortunately it's still not a full solution to your problem.
It seems to me that you're mixing a few separate problems in this question, so it would help to break it down into a few pieces:
Voronoi diagram:
The diagram is by definition infinite, so when you draw it directly you should expect a mess similar to what you got in your second image; this seems fine. I don't know exactly how SciPy handles it, but the implementation I've used flagged some edge ends as 'infinite' and provided the edge direction, so I could clip them at some distance myself. You'll need to check the exact data you get from SciPy.
In the 3D world you’ll almost always want to remove such infinite areas to get any meaningful rendering, or at least remove the area that contains your camera.
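As a rough illustration of what to look for in SciPy's output: scipy.spatial.Voronoi marks ridges that extend to infinity with a -1 vertex index, so you can separate them like this (the random input points are just an example):

import numpy as np
from scipy.spatial import Voronoi

points = np.random.rand(50, 3)  # example 3D input sites
vor = Voronoi(points)

# A -1 entry in ridge_vertices means the ridge extends to infinity.
finite_ridges = [rv for rv in vor.ridge_vertices if -1 not in rv]
infinite_ridges = [rv for rv in vor.ridge_vertices if -1 in rv]
print(len(finite_ridges), "finite ridges,", len(infinite_ridges), "infinite ridges")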
Points generation:
The Poisson disc is fine as sample data or for early R&D, but it's also the most boring one :). You'll need more ways to generate input points.
I tried to imagine the input needed for your ball-like example and I came up with something like this:
Create two spheres of points with the same center but different radii.
When you create a Voronoi diagram out of them and remove the infinite areas, you should end up with something like a football.
If you create both spheres randomly you'll get very irregular boundaries of the 'ball', but if you scale the points of one sphere to use for the second one, you should get a regular, ball-like mesh. You can also reuse the same points but add some random offset to control the level of surface irregularity (see the sketch after these steps).
Take your computed diagram and, for each edge, create a few points along that edge - this will give you small areas building up the edges of the bigger areas. Play with random offsets again. Try to ignore edges that don't touch any infinite region to get a result similar to your image.
Get the points from both stages and compute the diagram once more.
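A rough sketch (not part of the original answer) of the two-spheres idea from the steps above; all counts, radii and jitter values are made up:

import numpy as np

def sphere_points(n, radius, jitter=0.0, seed=None):
    # Sample n roughly uniform points on a sphere of the given radius.
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # unit directions
    pts = v * radius
    if jitter:
        pts += rng.normal(scale=jitter, size=pts.shape)  # surface irregularity
    return pts

outer = sphere_points(500, radius=10.0, seed=42)
inner = outer * 0.8  # scaled copy of the same points -> regular, ball-like result
points = np.vstack([outer, inner])  # feed this into scipy.spatial.Voronoi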
Mesh generation:
Up to now this still doesn't look like your target images. In fact it may be really hard to do with production quality (for a Maya plugin), but I see some tricks that may help.
What I would try first would be to take all my edges and extrude some circle along them. You may modulate the circle size to make it slightly bigger at the ends. Then do a Boolean 'OR' (union) between all those meshes and apply some mesh smoothing at the end.
This approach may give you similar results, but you'll need to be careful at mesh intersections; they can get ugly and need some special treatment.
I'm planning to generate a flower in Maya 2016 where the leaves are not flexible, but are built up from 4-5 segments connected to each other. I already have the Python script which takes a segment as an input parameter and generates the leaf (and also generates multiple leaves in a circle).
Now the question is how to animate it. I'm new to Maya animation. I've found a few examples of how to do basic things like rotation or transformation with Python, but what I wish to have is a realistic movement where the segments move the way they should.
My idea is to generate a "skeleton" with a bone and joint for every segment, and then animate the skeleton.
Do you think it's a good way of animating the leaf?
How can I set up the skeleton?
How can I animate a skeleton?
(For the last two, I expect a good source like a blogpost, but I also wish to know if my entire concept is good or not.)
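For what it's worth, here is a minimal sketch (all names, positions and key values are made up) of creating a joint chain for one leaf and keyframing a bend with maya.cmds:

import maya.cmds as cmds

cmds.select(clear=True)
# One joint per segment, laid out along X; each new joint is parented
# under the previously created one, forming a chain.
joints = [cmds.joint(position=(i * 2.0, 0, 0)) for i in range(5)]

# Bind the leaf geometry to the chain (assumes a mesh named 'leaf1' exists):
# cmds.skinCluster(joints[0], 'leaf1')

# Keyframe a gentle bend on each joint over 48 frames.
for j in joints[1:]:
    cmds.setKeyframe(j, attribute='rotateZ', time=1, value=0)
    cmds.setKeyframe(j, attribute='rotateZ', time=48, value=15)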
Thanks!
I am working on a simple 2D game in Python + Pygame where many enemies continually spawn and chase the player or players. A problem I ran into, and one many people who have programmed this type of game have run into, is that the enemies converge very quickly. I have made a temporary fix with a function that randomly pushes apart any two enemies that are too close to each other. This works well, but it is roughly an O(n^2) algorithm run every frame, and with a high number of enemies the program starts to slow down.
When my program runs with this function the enemies form a round object I nicknamed a "clump". The clump usually seems to be elliptical but may actually be more complex (not symmetrical), because as the player moves the enemies are pulled in different directions. I do like the way this clump behaves, but I am wondering if there is a more efficient way to calculate it. Currently every enemy in the clump (often >100) is first moved in the direction of the player and then pushed apart. If there were instead a way to calculate the figure that the clump creates, and how it moves, it would save a lot of computation.
I am not exactly sure how to approach the problem. It may be possible to calculate where the border of the figure moves and then expand it to make sure the area stays the same.
Also, here are the two functions currently used to move enemies:
import math
import random

def moveEnemy(enemy, player, speed):
    # Step the enemy toward the player at the given speed.
    a = player.left - enemy.left
    b = player.top - enemy.top
    r = speed / math.hypot(a, b)
    return enemy.move(r * a, r * b)

def clump(enemys):
    # Randomly push apart any pair of enemies closer than CLUMP (O(n^2)).
    for p in range(len(enemys)):
        for q in range(len(enemys) - p - 1):
            a = enemys[p]
            b = enemys[p + q + 1]
            if abs(a.left - b.left) + abs(a.top - b.top) < CLUMP:
                xChange = (random.random() - .5) * CLUMP
                yChange = ((CLUMP / 2) ** 2 - xChange ** 2) ** .5
                enemys[p] = enemys[p].move(int(xChange + .5), int(yChange + .5))
                enemys[p + q + 1] = enemys[p + q + 1].move(-int(xChange + .5), -int(yChange + .5))
    return enemys
Edit: some screen shots of how the clump looks:
http://imageshack.us/photo/my-images/651/elip.png/
http://imageshack.us/photo/my-images/832/newfni.png/
http://imageshack.us/photo/my-images/836/gamewk.png/
The clump seems to be mostly a round object, just stretched (like an ellipse, but possibly stretched in multiple directions); it currently has straight edges due to the rectangular enemies.
There are several ways to go about this, depending on your game. Here are some ideas for improving the performance:
Allow for some overlap.
Only run your distance checking every fixed number of frames rather than every frame.
Improve your distance check. If you are using the standard distance formula, it can be optimized in several ways; for one, get rid of the square root and compare squared distances, since only the relative distance matters.
Have each unit keep a list of nearby units and only perform your calculations between the units in that list. Every so often, update that list by checking against all units.
Depending on how your game is set up, you can split the field into areas, such as quadrants or cells, so units only get tested against other units in the same (or a neighbouring) cell. A rough sketch of this idea follows below.
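For example, a simple uniform-grid sketch (not from the original answer; it reuses the CLUMP constant and the enemy rectangles from the question) so that each enemy is only compared against enemies in its own and adjacent cells:

from collections import defaultdict

CELL = CLUMP  # cell size at least the interaction radius

def build_grid(enemys):
    # Bucket each enemy by the grid cell containing its top-left corner.
    grid = defaultdict(list)
    for e in enemys:
        grid[(e.left // CELL, e.top // CELL)].append(e)
    return grid

def nearby(grid, e):
    # Yield enemies in the same cell or one of the 8 neighbouring cells.
    cx, cy = e.left // CELL, e.top // CELL
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for other in grid.get((cx + dx, cy + dy), ()):
                if other is not e:
                    yield other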
EDIT: When the units get close to their target, it might not behave correctly. I would suggest that rather than having them home in on the exact target from far away, they actually seek a randomized nearby target, like an offset from their real target.
I'm sure there are many other ways to improve this, it is pretty open ended after all. I should also point out Boids and flocking which could be of interest.
You could define a clump as a separate object with a fixed number of spatial "slots" for each enemy unit in the clump. Each slot would have a set of coordinates relative to the clump center and would either be empty or hold a reference to one unit.
A new unit trying to join the clump would move towards the innermost free slot, and once it got there it would "stay in formation", its position always being the position of the slot it occupied. Clumps would have a radius much larger than a single unit, would adjust position to avoid overlapping other clumps or loose units that weren't trying to join the clump, etc.
At some point, though, you'd need to deal with interactions for the individual units in the clump, so I'm not sure it's worthwhile. I think Austin Henley's suggestion of splitting the field up into cells/regions and only testing against units in nearby cells is the most practical approach.
I think you're looking for flocking.
The best intro to flocking / steering behaviour movement is http://www.red3d.com/cwr/steer/. See the paper linked there and the related OpenSteer project.
It's a long one so you might want to get that cup of tea/coffee you've been holding off on ;)
I run a game called World of Arl; it's a turn-based strategy game akin to Risk or Diplomacy. Each player has a set of cities, armies and whatnot. The question revolves around the display of these things. Currently the map is created using a background image with CSS positioning of team icons on top of it to represent cities. You can see how it looks here: WoA Map
The background image for the map is located here: Map background. It was created in Omnigraffle, which isn't designed for drawing maps, but I'm hopelessly incompetent with Photoshop and this works just fine for my purposes.
The problem is that I want to perform such fun things as pathfinding, and for that I need the map stored in code somehow. I have tried using PIL, I have looked at incorporating it with Blender, I tried going "old school" and creating tiles as in many older games, and finally I tried to use SVG. I say this so you can see clearly that it's not through lack of trying that I have this problem ;)
I want to be able to store the map layout in code and both create an image from it and use it for things such as pathfinding. I'm using Python, but I suspect that most answers will be generic. The cities and other such things are already stored and are easily drawn on top; what I want to store is the layout of the landmass and the features on it.
As for pathfinding, each type of terrain has a movement cost, and when the map is stored as just an image I can't access the terrain of a given area. In addition to pathfinding, I wish to be able to know the terrain for various things related to the game; cities in mountains produce stone, for example.
Is there a good way to do this, and what terms should I have used in Google? The terms I tried all came up with unrelated results ("mapping" meaning something completely different most of the time).
Edit 2:
Armies can be placed anywhere on the map as can cities, well, anywhere but in the water where they'd sink, drown and probably complain (in that order).
After chatting on MSN with somebody who made me go over the really minute details, and who has a better understanding of the game (owing to the fact that he's played it), it occurred to me that tiles are the way to go, but not in the way I had initially thought. I keep the bitmap as it is now but also have a data layer of tiles; each tile has a given terrain type, so pathfinding and suchlike can be done on it, while at the same time I still render using Omnigraffle, which works pretty well.
I will be making an editor for this as suggested by Adam Smith. I don't know whether graphs will be relevant, Xynth, as I've not had a chance to look into them fully yet.
I really appreciate all those that answered my question, thanks.
I'd store a game map in code as a graph.
Each node would represent a country/city and each edge would represent adjacency. Once you have a map like that, I'm sure you can find many resources on AI (pathfinding, strategy, etc.) online.
If you want to be able to build an image of the map programmatically, consider adding an (x, y) coordinate and an image to each node. That way you can display all of the images at their coordinates to build up the map view.
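For illustration, a minimal sketch of such a graph (all names, coordinates and terrain values here are invented):

# Each node is a territory with a position, an image and a terrain type;
# edges mark adjacency.
class Territory:
    def __init__(self, name, x, y, image, terrain):
        self.name = name
        self.x, self.y = x, y
        self.image = image          # sprite drawn at (x, y)
        self.terrain = terrain      # e.g. 'plains', 'mountain', 'water'
        self.neighbours = []        # adjacent Territory objects

def connect(a, b):
    a.neighbours.append(b)
    b.neighbours.append(a)

plains = Territory('Northholm', 120, 80, 'northholm.png', 'plains')
hills = Territory('Eastmark', 200, 95, 'eastmark.png', 'mountain')
connect(plains, hills)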
The key thing to realize here is that you don't have to use just one map. You can use two maps:
The one you already have which is drawn on screen
A hidden map which isn't drawn but which is used for path finding, collision detection etc.
The natural next question is where this second map comes from. Easy: you create your own tool which can load your first map and display it. Your tool then lets you draw boundaries around your islands and place markers at your cities. These markers and boundaries (simple polygons, for example) are stored as your second map and used in your code to do pathfinding etc.
In fact you can have your tool emit Python code which creates the graphs and polygons, so that you don't have to load any data yourself.
I am basically telling you to make a level editor. It isn't very hard to do. You just need some buttons to click on to define what you are adding, e.g. a polygon: each time you click the mouse with the "add polygon" mode toggled on, you append that mouse coordinate to an array. You can have another button for adding cities, so that each time you click on the map you record that coordinate for the city, along with a name provided in a text box.
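As a rough illustration (all names and coordinates are invented) of what such emitted Python data could look like, together with a standard ray-casting point-in-polygon test you could run against it:

# Island boundaries as polygons (lists of (x, y) points) and city markers.
islands = {
    'Greater Arl': [(102, 240), (180, 210), (230, 300), (150, 355)],
    'Small Isle':  [(400, 120), (450, 140), (430, 190)],
}

cities = {
    'Port Haven': (160, 300),
    'Stonewatch': (430, 150),
}

def point_in_polygon(x, y, poly):
    # Ray-casting test: is (x, y) inside the polygon?
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside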
You're going to have to translate your map into an abstract representation of some kind. Either a grid (hex or square) or a graph as xynth suggests. That's the only way you're going to be able to apply things like pathfinding algorithms to it.
IMO, the map should be rendered programmatically in the first place instead of being a bitmap. What you should be doing is have separate objects, each clearly knowing its own dimensions, such as a generic Area class and classes like City, Town etc. derived from it. Your objects should hold all the information about their location, their terrain etc. and should render/paint themselves. This way you will have exact knowledge of where everything lies.
Another option is to keep the bitmap as it is and keep this information in your objects as data. In that case the objects won't have a draw function, but they will have precise information about their placement etc. This is sort of duplicating the data, but if you want to go with the bitmap option, I can't think of any other way.
If you just want to do e.g. 2D hit-testing on the map, then storing it yourself is fine. There are a few possibilities for how you can store the information:
A polygon per island
Representing each island as a union of a list of rectangles (commonly used by windowing systems)
Creating a special (maybe greyscale) bitmap of the map which uses a unique solid colour for each island (a rough sketch of this follows after the list)
Something more complex (perhaps whatever Omnigraffle's internal representation is)
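A rough sketch of the unique-colour bitmap idea (the filename and colour-to-island mapping are made up), using PIL since you mentioned trying it:

# A second, hidden index bitmap has each island filled with a unique solid
# colour; hit-testing is then just reading the pixel under a coordinate.
from PIL import Image

ISLAND_BY_COLOUR = {
    (255, 0, 0): 'Greater Arl',
    (0, 255, 0): 'Small Isle',
    (0, 0, 255): 'water',
}

index_map = Image.open('map_index.png').convert('RGB')

def island_at(x, y):
    return ISLAND_BY_COLOUR.get(index_map.getpixel((x, y)), 'unknown')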
Assuming the map is fixed (not created on the fly), it's "correct" to use a bitmap as the graphical representation - you want to make it as pretty as possible.
For any game related features such as pathfinding or whatever fancy stuff you want to add you should add adequate data structures, even if that means some data is redundant.
E.g. describe the boundaries of the isles as polygon splines (either manually or automatically created from the bitmap; that's up to you, and depends on how much effort you want to spend and what's needed to get the functionality you want).
To sum it up: create data structures matching the problems you have to solve. The bitmap is fine for looks, but avoid doing pathfinding or other such work on it.