A short while ago I asked for suggestions on choosing a Python-compatible 3D graphics library for robotic motion modelling (using inverse kinematics in Python). After doing a bit of research and redefining my objectives, I hope I can ask once again for a bit of assistance.
At the time I thought Blender was the best option - but now I'm having doubts. One key objective I have is the ability to integrate the model into a custom GUI (wxPython). It seems like this might be rather difficult (and I'm unsure of the performance requirements).
I think I'm now leaning more towards OpenGL (PyOpenGL + wxglcanvas), but I'm still struggling to determine if it's the right tool for the job. I'm more of a CAD person, so I have trouble envisioning how to draw complex objects in the API and create motion. I read that I could design the object in, say, Blender, then import it into OpenGL somehow, but I'm unsure of the process. And how difficult is manipulating the motion of objects? For example, if I create a joint between two links and I move one link, would the other link move dynamically according to the first, or would I need to program each link's movement independently?
Have I missed any obvious tools? I'm not looking for complete robotic modelling packages; I would like to start from scratch so I can incorporate it into my own program - it's for learning more than anything. So far I've already looked into VPython, Pyglet, Panda3D, Ogre, and several professional CAD packages.
Thanks
There is a similar project going on that implements a robotics toolbox for MATLAB and Python. It has "rudimentary 3D graphics", but you can always interface it with Blender with a well-knit script; that will be less work than reinventing the wheel.
If movements can be pre-computed, you can use Blender: hand-craft the animations, bake them into some animated file format (Cal3D?), and just render them in your wxPython OpenGL window.
If you need to handle user input, you can use a physics engine... I hear Bullet has Python bindings: http://www.bulletphysics.org/Bullet/phpBB3/viewtopic.php?p=&f=9&t=4030 (probably still unstable).
Regarding your doubts on Blender/OpenGL: what do you mean by "complex objects"? How many "robots/whatever"? How many triangles per robot? How many articulations per robot? (I'll edit my answer depending on yours.)
Anyway, OpenGL in itself won't do anything other than just "display triangles"; everything else has to be done elsewhere.
EDIT
Sorry for the delay, I completely forgot.
So here is what I suggest :
1. Model your robot in Blender with polygons. You can easily go above 10k polygons, but try to keep the number of objects small (one object per moving part).
2. Rig it, i.e. create a skeleton for it. You don't need to animate it.
3. Export it as Collada or X3D.
4. Re-import it in your own OpenGL app.
5. Draw your objects at the positions and orientations specified by the skeleton.
6. Modify the angles between the bones just as you would with real stepper motors.
7. If step #5 was done right, the robot should follow the movements.
Optionally add physics (for instance with Bullet). The API will be similar in concept to OpenGL, and you will be able to catch objects with your robotic arm.
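To make steps 5 and 6 concrete, here is a minimal sketch of the idea in legacy fixed-function PyOpenGL (a current GL context, e.g. from a wx.glcanvas.GLCanvas paint handler, is assumed; draw_link is a hypothetical stand-in for the mesh you imported from Blender). Because each link is drawn inside its parent's transform, rotating one joint automatically moves everything attached beyond it - you don't program each link's movement independently:

```python
# Forward kinematics with OpenGL's matrix stack: each child link inherits
# its parent's transform, so moving one joint moves everything attached to it.
from OpenGL.GL import (glPushMatrix, glPopMatrix, glTranslatef, glRotatef,
                       glBegin, glEnd, glVertex3f, GL_LINES)

def draw_link(length):
    # Placeholder geometry: a single line segment along +X.
    # In practice you would draw the mesh imported from Blender here.
    glBegin(GL_LINES)
    glVertex3f(0.0, 0.0, 0.0)
    glVertex3f(length, 0.0, 0.0)
    glEnd()

def draw_arm(joint_angles, link_lengths):
    """joint_angles: degrees about Z for each joint, base first."""
    glPushMatrix()
    for angle, length in zip(joint_angles, link_lengths):
        glRotatef(angle, 0.0, 0.0, 1.0)   # rotate about this joint
        draw_link(length)
        glTranslatef(length, 0.0, 0.0)    # move the origin to the end of this link
    glPopMatrix()

# e.g. draw_arm([30.0, -45.0, 10.0], [1.0, 0.8, 0.5]) inside your paint handler
```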
Good luck!
Related
I have been thinking about my final-year project topic and, to be honest, I want to create something GREAT like many others. I know C, C++, Java and Python (Python is getting quite popular these days). I want to create a small-scale application like Blender (graphics rendering software) - any tips for me? I prefer using OpenGL and its shading language rather than Direct3D, since it is an open standard.
Tell me what I should know to pull this off, and also whether the combination of Python and OpenGL is a good choice for this application.
I want to create a small-scale application like Blender (graphics rendering software) any tips for me?
Yes: readjust your perception of software size/complexity. I occasionally contribute to Blender (TBH, it's been years since I submitted something substantial), and over the years it has turned into a mighty suite. But the codebase is just as large.
A small object-viewer should definitely be possible. That's something you can build and add features upon, depending on how much time is left. I would do the visualization and movement in your scene first, then some basic interactions with your objects (translating, rotating, etc.). The final step would be adding tools (edit polygons, sculpt, etc.). If you are fluent enough in C++, OpenGL, and software architecture on a larger scale, it should be doable.
I am looking for a library, example or similar that allows me to load a set of 2D projections of an object and then convert it into a 3D volume.
For example, I could have 6 pictures of a small toy and the program should allow me to view it as a 3D volume and eventually save it.
The object I need to convert is very similar to a cylinder (so the program doesn't have to 'understand' what type of object it is).
There are several things you could mean, none of which, I think, currently exists in free software (but I may be wrong about that), and they differ in how hard they are to implement:
First of all, "a 3D volume" is not a clear definition of what you want. There is not one way to store this information. A usual way (for computer games and animations) is to store it as a mesh with textures. Getting the textures is easy: you have the photographs. Creating the mesh can be really hard, depending on what exactly you want.
You say your object looks like a cylinder. If you want to just stitch your images together and paste them as a texture over a cylindrical mesh, that should be possible. If you know the angles at which the images are taken, the stitching will be even easier.
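To give a rough idea of that cylindrical mapping, here is a small sketch (plain Python/numpy; the radius, height and segment count are made-up values) that builds a cylinder mesh whose texture U coordinate wraps around the axis, so a panorama stitched from your photographs can be pasted straight onto it:

```python
import numpy as np

def cylinder_mesh(radius=1.0, height=2.0, segments=32):
    """Return (vertices, uvs, quads) for an open cylinder.
    U runs 0..1 around the axis and V runs 0..1 along the height,
    so a stitched panorama maps directly onto the side of the cylinder."""
    verts, uvs, quads = [], [], []
    for i in range(segments + 1):
        u = i / segments
        theta = 2.0 * np.pi * u
        x, y = radius * np.cos(theta), radius * np.sin(theta)
        for v in (0.0, 1.0):                 # bottom and top ring vertices
            verts.append((x, y, v * height))
            uvs.append((u, v))
    for i in range(segments):
        a = 2 * i                            # index of the bottom vertex of column i
        quads.append((a, a + 1, a + 3, a + 2))
    return np.array(verts), np.array(uvs), quads
```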
However, the really cool thing that most people would want is to create any mesh, not just a cylinder, based on the stitching "errors" (which originate from the parallax effect, and therefore contain information about the depth of the pictures). I know Autodesk (the makers of AutoCAD) have a web-based tool for this (named 123-something), but they don't let you put it into your own program; you have to use their interface. So it's fine for getting a result, but not as a basis for a program of your own.
Once you have the mesh, you'll need a viewer (not "view first, save later"; it's the other way around). You should be able to use any 3D drawing program; for example, Blender can view (and edit) many file types.
I'm looking to visualize the data and hopefully make it interactive. Right now I'm using NetworkX and Matplotlib, which maxes out my 8 GB of RAM when I attempt to 'draw' the graph. I don't know what options and techniques exist for handling such a large cluster of data. If someone could point me in the right direction, that'd be great. I also have a CUDA-enabled graphics card, if that could be of use.
Right now I'm thinking of drawing only the most connected nodes, say the top 5% of vertices with the most edges, then filling in less connected nodes as the user zooms or clicks.
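For concreteness, a rough sketch of that filtering idea (NetworkX + Matplotlib; the 5% cutoff is just a placeholder and G is my existing graph):

```python
import networkx as nx
import matplotlib.pyplot as plt

def draw_top_degree_nodes(G, fraction=0.05):
    # Keep only the most-connected nodes and the edges among them.
    degrees = dict(G.degree())
    k = max(1, int(len(degrees) * fraction))
    keep = sorted(degrees, key=degrees.get, reverse=True)[:k]
    H = G.subgraph(keep)
    nx.draw(H, node_size=10, width=0.2, with_labels=False)
    plt.show()
```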
I don't have any experience with it, but Tulip seems to be made for that.
Maybe PyOpenGL? It can be used together with wxPython.
Edit: I just tried the performance without any optimization; it takes 0.2 s to draw 100k vertices and 4 s to draw 1M edges.
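Something along these lines (a rough, unoptimized sketch using PyOpenGL vertex arrays, not the exact code timed above; numpy float32 position arrays and a current GL context from a wx.glcanvas window are assumed):

```python
from OpenGL.GL import (glEnableClientState, glDisableClientState,
                       glVertexPointer, glDrawArrays,
                       GL_VERTEX_ARRAY, GL_FLOAT, GL_POINTS, GL_LINES)

def draw_graph(node_xy, edge_xy):
    """node_xy: (N, 2) float32 node positions; edge_xy: (2*E, 2) float32 endpoints, two rows per edge."""
    glEnableClientState(GL_VERTEX_ARRAY)
    glVertexPointer(2, GL_FLOAT, 0, edge_xy)      # hand the whole edge buffer to GL
    glDrawArrays(GL_LINES, 0, len(edge_xy))       # one call for all edges
    glVertexPointer(2, GL_FLOAT, 0, node_xy)
    glDrawArrays(GL_POINTS, 0, len(node_xy))      # one call for all nodes
    glDisableClientState(GL_VERTEX_ARRAY)
```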
You should ask on the official wxPython mailing list. There are people there who can probably help you. I am surprised that matplotlib isn't able to do this, though. It may just require you to restructure your code in some way. Right now, the main ways to draw in wxPython are via the various DCs, one of the FloatCanvas widgets, or, for graphing, wx.Plot or matplotlib.
Have you considered graphviz? It's not interactive, although it was designed from the outset to handle very large graphs (though 1M edges may be beyond even its capabilities).
There's a Python module (pydot) that makes interacting with graphviz simple. Again, I can't say for sure it'll scale to your levels. However, it should be easy to find out: installation of both is simple.
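For example, a minimal pydot sketch (the node names and output file are made up) looks like this:

```python
import pydot

graph = pydot.Dot(graph_type='graph')   # undirected graph
graph.add_edge(pydot.Edge('a', 'b'))
graph.add_edge(pydot.Edge('b', 'c'))
graph.write_png('graph.png')            # runs the graphviz 'dot' layout and renders to PNG
```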
hth.
Have you considered using ParaView or VisIt? These are two interactive plotting programs which are designed to deal with and plot (very!) large data sets. They both also have a Python scripting interface, so you can automate/control your visualizations from within the Python interpreter.
Have you tried Gephi?
I believe it scales very well.
I am planning to write a simple 3D (isometric view) game in Java using jMonkeyEngine - nothing too fancy, I just want to learn something about OpenGL and writing efficient algorithms (random map-generating ones).
When I was planning what to do, I started wondering about switching to Python. I know that Python didn't come into existence to be a tool for writing 3D games, but is it possible to write good-looking games with this language?
I have in mind 3D graphics, nice effects, and free CPU time left over to power the rest of the game engine. I have seen good-looking Java games - and to be honest, I was rather shocked when I saw the level of detail achieved in RuneScape HD.
On the other hand, pygame.org has only 2D games, with some fledgling 3D projects. Are there any efficient 3D game engines for Python? Is PyOpenGL the only alternative? Are good-looking games in Python just unpopular, or not possible to achieve?
I would be grateful for any information / feedback.
If you are worried about 3D performance: Most of the performance-critical parts will be handled by OpenGL (in a C library or even in hardware), so the language you use to drive it should not matter too much.
To really find out if performance is a problem, you'd have to try it. But there is no reason why it cannot work in principle.
At any rate, you could still optimize the critical parts, either in Python or by dropping to C. You still get Python's benefits for most of the game engine, which is less performance-critical.
Yes. Eve Online does it.
http://support.eve-online.com/Pages/KB/Article.aspx?id=128
I did a EuroPython talk about my amateur attempts to drive OpenGL from Python:
http://pyvideo.org/video/381/pycon-2011--algorithmic-generation-of-opengl-geom
The latest version of the code I'm talking about is here:
https://github.com/tartley/gloopy
It's billed as a 'library', but that was naive of me: It's a bunch of personal experimental code.
Nevertheless, it demonstrates that you can move around hundreds of bits of geometry at 60fps from Python.
Although the demo above is fairly bare-bones in that it uses simple geometry and untextured faces, one thing I found is that more detailed geometry, texture mapping, or other more modern graphics effects don't substantially affect the framerate. Or at least they don't affect it any more than using the same effects in a C program would. These run on the GPU, so it doesn't make any difference at all that your program is written in Python.
One thing that is performance-sensitive from Python is creating dynamic geometry on the CPU side, e.g. moving individual vertices within a shape by bending or melting it. Doing this sort of per-vertex calculation in Python, then constructing a new ctypes array from the result, then shunting that geometry to the GPU every frame, will be slow. Instead you should probably be doing this in a vertex shader.
On the other hand, if you just want affine transformations (moving objects around, rotating them, opening chests of drawers, rotating car wheels, bending a jointed robot arm) then all of this can be done by the GPU and the fact your program is written in Python makes little difference to the performance.
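To illustrate pushing that per-vertex work onto the GPU, here is a rough sketch using PyOpenGL's shader helpers (the "bend" deformation is a made-up example, old-style GLSL 1.20 built-ins are used, and a compatible GL context is assumed to be current):

```python
from OpenGL.GL import GL_VERTEX_SHADER, GL_FRAGMENT_SHADER
from OpenGL.GL import shaders

VERTEX_SRC = """
#version 120
uniform float bend;                  // animate this from Python each frame
void main() {
    vec4 p = gl_Vertex;
    p.y += bend * p.x * p.x;         // cheap per-vertex deformation, done on the GPU
    gl_Position = gl_ModelViewProjectionMatrix * p;
}
"""

FRAGMENT_SRC = """
#version 120
void main() { gl_FragColor = vec4(1.0); }
"""

def build_bend_program():
    # Compile and link once at startup; update the 'bend' uniform per frame instead
    # of rebuilding vertex arrays in Python.
    vs = shaders.compileShader(VERTEX_SRC, GL_VERTEX_SHADER)
    fs = shaders.compileShader(FRAGMENT_SRC, GL_FRAGMENT_SHADER)
    return shaders.compileProgram(vs, fs)
```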
You might want to check out Python-Ogre. I've just messed with it myself, nothing serious, but it seems pretty good.
I would recommend pyglet, which is a similar system to pygame, but with full bindings to OpenGL. You can start with simple 2D games to get the hang of the system and work up to 3D later. It is a more modern system than Pygame, which is built around SDL, which itself is a bit long in the tooth these days.
Perhaps a wee bit off topic but, if your goal is to learn Python, how about creating a game using IronPython and XNA? XNA is not OpenGL though, yet I find it an extremely simple 2D/3D engine which is fast and supports Shader Model 3.0.
Check out the Frets on Fire project -- an open source Guitar Hero alternative. It's written in Python and has decent 3D graphics in OpenGL. I would suggest checking out its sources for hints on libraries etc.
There was a Vampires game out a few years ago where most, if not all, of the code was written in Python. I'm not sure whether the 3D routines were in Python, but it worked fine.
I'm looking for a Python library for creating canvases for manipulating geometric shapes. Specifically, I need the ability to create arbitrary polygons and place them on the canvas; the polygons need to be able to be transparent/have an alpha channel; I need to be able to edit polygons that are currently on the canvas; and I need to be able to get the actual color of a given pixel (the aggregate of all the transparent pieces that are there).
Basically I'm trying to make this: http://alteredqualia.com/visualization/evolve/ in python.
I think cairo will do a lot of what you want. It has Python bindings, too.
The one requirement that it won't help you with is modifying previously drawn polygons, but I don't know of any canvas that will do that for you.
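For instance, a small sketch with pycairo (the polygon coordinates and colours are arbitrary; note that FORMAT_ARGB32 data is premultiplied by alpha and the byte order is platform-dependent):

```python
import cairo

WIDTH, HEIGHT = 200, 200
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, WIDTH, HEIGHT)
ctx = cairo.Context(surface)

def draw_polygon(points, rgba):
    # Fill an arbitrary polygon with a semi-transparent colour.
    ctx.move_to(*points[0])
    for x, y in points[1:]:
        ctx.line_to(x, y)
    ctx.close_path()
    ctx.set_source_rgba(*rgba)   # the alpha component gives the transparency
    ctx.fill()

draw_polygon([(10, 10), (150, 40), (80, 160)], (1.0, 0.0, 0.0, 0.4))
draw_polygon([(60, 20), (180, 90), (40, 180)], (0.0, 0.0, 1.0, 0.4))

# Read back the blended colour of one pixel from the raw surface buffer.
stride = surface.get_stride()
data = surface.get_data()
x, y = 80, 80
offset = y * stride + x * 4
b, g, r, a = bytes(data[offset:offset + 4])   # BGRA byte order on little-endian machines
```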
Sounds like a job for OpenGL.
My advice is that, whichever library you choose, you make a data structure for your polygons that suits your algorithms, so that they can be simpler and more readable, rather than trying to get those algorithms to manipulate a canvas directly. Then you can write the code that draws them separately from (i.e. independently of) the main logic.
This discussion on Stack Overflow has some comparisons and code snippets on various GUI toolkits for Python. I'm pretty sure that the QGraphicsView in Qt will do transparency. Nokia (née Trolltech) makes a demo suite for Qt that should give you an idea of its capabilities.
Try pyglet. It is a graphics library for Python with OpenGL. If you've done OpenGL programming before, it is certainly the easiest way to get what you want.
I believe the HTML canvas lets you modify elements
It does not. You can check out my HTML canvas tutorial to see how you draw a moving ball; you wipe the screen and draw a new circle at the spot you want.
You can draw simple shapes to a canvas in all of pyglet, pygame, QT, Tkinter, wxPython and cairo.
Generally, you will have objects called "sprites" or "shapes" that represent objects drawn to the screen, and you'll store them all in a container. Then the library or framework will, at every frame, render them all to the canvas. Thus it will seem to the user (you) that you can modify the objects on screen; you set a ball's x and y coordinates and in the next frame it's rendered there. However, at a low level, everything's being wiped and redrawn again.
For computationally intensive animation, a technique called double-buffering is employed, whereby a bitmap in memory is modified instead of the one on screen; the drawing step then simply copies that bitmap to the screen.
"...alter the item in the list and then create a new canvas, which seems like it would have a significant overhead."
All of the frameworks mentioned above will give you a nice abstraction for the list of objects to draw, so that you won't need to maintain it manually, and you can program as if the sprites/shapes you've drawn can be directly moved onscreen, even though they really aren't at a low level.
Pygame should be able to do this for you.
See pygame.draw.polygon
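A minimal sketch along those lines (pygame; the polygon points, colours and window size are arbitrary):

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((200, 200))
screen.fill((255, 255, 255))

def blit_polygon(points, rgba):
    # Each polygon gets its own per-pixel-alpha layer, which is then blended onto the screen.
    layer = pygame.Surface(screen.get_size(), pygame.SRCALPHA)
    pygame.draw.polygon(layer, rgba, points)
    screen.blit(layer, (0, 0))

blit_polygon([(10, 10), (150, 40), (80, 160)], (255, 0, 0, 100))
blit_polygon([(60, 20), (180, 90), (40, 180)], (0, 0, 255, 100))
pygame.display.flip()

print(screen.get_at((80, 80)))   # the aggregate colour actually on screen at that pixel
```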
I believe the HTML canvas lets you modify elements, which makes me believe there might be another canvas that can do so as well. However, if there is not, that would basically require me to keep a separate list of all the polygons, and, when I wanted to make a change, alter the item in the list and then create a new canvas - which seems like it would have a significant overhead.
Both Qt and wxWidgets have some canvas drawing abilities (Qt calls it GraphicsView). Quick Google searches will get you a lot of examples so you can see if it fits your requirements.