2D image projections to 3D Volume [closed] - python

I am looking for a library, example or similar that allows me to load a set of 2D projections of an object and then convert it into a 3D volume.
For example, I could have 6 pictures of a small toy and the program should allow me to view it as a 3D volume and eventually save it.
The object I need to convert is very similar to a cylinder (so the program doesn't have to 'understand' what type of object it is).

There are several things you could mean here, and I think none of them currently exists in free software (but I may be wrong about that); they differ in how hard they are to implement:
First of all, "a 3D volume" is not a clear definition of what you want. There is no single way to store this information. A common way (for computer games and animations) is to store it as a mesh with textures. Getting the textures is easy: you have the photographs. Creating the mesh can be really hard, depending on what exactly you want.
You say your object looks like a cylinder. If you want to just stitch your images together and paste them as a texture over a cylindrical mesh, that should be possible (a sketch of the stitching step follows below). If you know the angles at which the images were taken, the stitching will be even easier.
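For the stitching half, here is a minimal, hedged sketch assuming OpenCV 4's stitching module (the file names are made up; the resulting panorama would then be wrapped as a texture around a cylinder mesh):

import cv2

# Load the set of photographs taken around the object.
images = [cv2.imread("view_%d.jpg" % i) for i in range(6)]

# The stitcher matches overlapping features and blends the images
# into a single panorama, usable as the texture for a cylinder mesh.
stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("cylinder_texture.jpg", panorama)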
However, the really cool thing most people would want is to create an arbitrary mesh, not just a cylinder, based on the stitching "errors" (which originate from the parallax effect, and therefore contain information about depth in the pictures). I know Autodesk (the makers of AutoCAD) have a web-based tool for this (named 123-something), but they don't let you build it into your own program; you have to use their interface. So it's fine for getting a result, but not as a basis for a program of your own.
Once you have the mesh, you'll need a viewer (so it's save first, view later, not the other way around). You should be able to use any 3D drawing program; Blender, for example, can view (and edit) many file types.

Related

Programmatically generate image layout [closed]

I'd like to create a Python script that would receive some text and photos, arrange and compose them following some rules, and output a final image. To do so, I would need a Python library that could:
Read, scale and move pictures to create composite images.
Insert text and maybe some simple glyphs (circles, arrows)
Apply masks to images.
I've started using pycairo to that end, and while it is certainly very capable, it's rather slow and most certainly not the right tool for the job; it's a vector graphics library, after all. There's Pillow as well, but I reckon it's too low-level.
Is there a python library better-suited to that task?
OpenCV is the library used most often in imaging solutions. I will post some templates for people who might be looking for these functions.
1) Read, scale and move pictures to create composite images.
import cv2
img = cv2.imread("path/to/image.png")              # reads the image as a NumPy array
scaled = cv2.resize(img, (new_width, new_height))  # new_width/new_height are your target size
cv2.imread is the way you read an image in OpenCV. It is given to you as an array, and the resize function should settle the scaling. You can create composite images with OpenCV as well; here is a template I adapted from here.
import numpy as np
import cv2
A = cv2.imread(r"C:\path\to\a.png", 0)  # the 0 flag loads the image as grayscale
B = cv2.imread(r"C:\path\to\b.png", 0)
#C = cv2.merge((B,A,B))                 # cv2.merge would work here as well
C = np.dstack((B,A,B))                  # stack the grayscale images as three channels
cv2.imshow("imfuse", C)                 # both inputs must share the same dimensions
cv2.waitKey(0)
2) Insert text and maybe some simple glyphs (circles, arrows)
cv2.putText()
can definitely solve your issue; it takes the image and the text as arguments. For inserting glyphs there are other functions, one of which is:
cv2.arrowedLine()
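As a small, hedged sketch of those calls on a blank canvas (the canvas size, coordinates, font and colors are made-up values):

import numpy as np
import cv2

canvas = np.full((400, 600, 3), 255, np.uint8)  # white 600x400 canvas
cv2.putText(canvas, "Hello", (50, 60), cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 0, 0), 2)
cv2.circle(canvas, (300, 200), 40, (0, 0, 255), 3)               # red circle outline
cv2.arrowedLine(canvas, (100, 350), (250, 250), (255, 0, 0), 2)  # blue arrow
cv2.imshow("glyphs", canvas)
cv2.waitKey(0)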
3) Apply masks to images.
You can also apply masks to images. This is not a one-liner, so I will leave a good link that I was relying on here; a rough sketch of the basic pattern follows below.
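A hedged sketch of that pattern, assuming a single-channel mask (the image path and circle geometry are placeholders):

import numpy as np
import cv2

img = cv2.imread("photo.png")
mask = np.zeros(img.shape[:2], np.uint8)   # single-channel mask, all black
cv2.circle(mask, (img.shape[1] // 2, img.shape[0] // 2), 100, 255, -1)  # filled white circle
masked = cv2.bitwise_and(img, img, mask=mask)  # keeps only pixels where the mask is nonzero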
For clarification, as @martineau said, you can do these with Pillow, but it might need some extra work on your part. As for the part where you might need a smaller library, you might consider OpenCVlite, but I haven't had any experience with it yet.

PyOpenGL camera system [closed]

I'm confused about how the PyOpenGL camera works and how to implement it. Am I meant to rotate and move the whole world around the camera, or is there a different way?
I couldn't find anything that helped me, and I don't know how to translate C to Python.
I just need a way to transform the camera that can help me understand how it works.
To say it bluntly: there is no such thing as a "camera" in OpenGL (nor is there one in DirectX, or Vulkan, or any of the legacy 3D graphics APIs). The effect of a camera is achieved through parameters that contribute to the ultimate placement of geometry inside the viewport volume.
The sooner you understand that all current GPUs do is offer massively accelerated computational resources to set the values of pixels in a 2D grid, where the regions of pixels changed are mere points, lines or triangles on a 2D plane onto which they are projected from an arbitrarily dimensioned, abstract space, the better.
You're not even moving the world around the camera. Setting up transformations is actually erecting the stage in which "the world" will appear in the first place. Any notion of a "camera" is an abstraction created by a higher-level framework, like a third-party 3D engine or your own creation.
So instead of thinking in terms of a camera, which constrains your thinking, you should think about it this way:
What kind of transformations do I have to chain up to give a tuple of numbers called a "position" an actual meaning, by letting this position turn up at a certain place on the visible screen?
You really ought to think that way, because that is what's actually happening.
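To make that concrete, here is a minimal sketch of such a transformation chain using the legacy fixed-function pipeline (the setup_view function and its parameters are invented; only the GL/GLU calls are real):

from OpenGL.GL import *
from OpenGL.GLU import gluPerspective, gluLookAt

def setup_view(width, height, cam_pos, target):
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    gluPerspective(45.0, width / float(height), 0.1, 100.0)  # view space -> clip space

    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    # gluLookAt multiplies the inverse "camera" transform onto the
    # modelview matrix; it is the scene's geometry that actually moves.
    gluLookAt(cam_pos[0], cam_pos[1], cam_pos[2],
              target[0], target[1], target[2],
              0.0, 1.0, 0.0)  # world up vector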

Getting the dimensions of objects in other Maya scenes [closed]

I'm trying to check the width of an object in another scene. The object in the other scene will be imported as a reference, but I need to know its width/height/depth (x/y/z bounding box) in order to fit a number of them into my scene according to parameters set by a script of mine.
The only way I've figured out so far is to reference the object into the scene, check the bounding box with the xform command, then remove the reference and proceed as normal. That solution seems both a bit slow (for large objects) and a bit awkward.
There's no way to interact with a Maya scene without it already being loaded in Maya. I think your method is correct.
What do you mean by "fit a number of them into my scene"? Do you mean you want to make multiple references based on the size, i.e. you want to fill up a given volume, using the bounding box to determine how many will be needed? It seems that could be done after making one reference as easily as not.
There's no other way to check than opening the file.
You could do an offline batch process to collect all of the information once and save it to a database or a simple file such as a CSV for faster access, if speed is really an issue; a sketch of that idea follows.
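A hedged sketch of such a batch pass, run under mayapy (the scene paths and CSV layout are invented for illustration):

import csv
import maya.standalone
maya.standalone.initialize(name="python")
import maya.cmds as cmds

scenes = ["/assets/propA.ma", "/assets/propB.ma"]  # hypothetical paths
with open("bboxes.csv", "w") as f:
    writer = csv.writer(f)
    writer.writerow(["scene", "object", "width", "height", "depth"])
    for scene in scenes:
        cmds.file(scene, open=True, force=True)
        # Note: this also lists default transforms (persp, top, ...);
        # real code would filter down to the objects of interest.
        for obj in cmds.ls(type="transform"):
            xmin, ymin, zmin, xmax, ymax, zmax = cmds.xform(
                obj, query=True, boundingBox=True, worldSpace=True)
            writer.writerow([scene, obj, xmax - xmin, ymax - ymin, zmax - zmin])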

Python tools to visualize 100k Vertices and 1M Edges? [closed]

I'm looking to visualize the data and hopefully make it interactive. Right now I'm using NetworkX and Matplotlib, which maxes out my 8 GB of RAM when I attempt to draw the graph. I don't know what options and techniques exist for handling such a large cluster of data. If someone could point me in the right direction, that'd be great. I also have a CUDA-enabled graphics card, if that could be of use.
Right now I'm thinking of drawing only the most connected nodes, say the top 5% of vertices with the most edges, then filling in less connected nodes as the user zooms or clicks.
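A minimal, hedged sketch of that hubs-first idea with NetworkX (the generated graph is just a stand-in for the real data):

import networkx as nx
import matplotlib.pyplot as plt

G = nx.barabasi_albert_graph(10000, 3)  # stand-in for the real graph

k = max(1, int(0.05 * G.number_of_nodes()))      # top 5% of vertices
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:k]
sub = G.subgraph(n for n, d in hubs)             # induced subgraph of the hubs

nx.draw(sub, node_size=10, width=0.2)
plt.show()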
I don't have any experience with it, but Tulip seems to be made for exactly that.
Maybe PyOpenGL? It can be used together with wxPython.
Edit: I just tried the performance without any optimization: it takes 0.2 s to draw 100k vertices and 4 s to draw 1M edges.
You should ask on the official wxPython mailing list. There are people there who can probably help you. I am surprised that matplotlib isn't able to do this, though. It may just require you to restructure your code in some way. Right now, the main ways to draw in wxPython are via the various DCs, one of the FloatCanvas widgets, or, for graphing, wx.Plot or matplotlib.
Have you considered graphviz? It's not interactive, but it was designed from the outset to handle very large graphs (although 1M edges may be beyond even its capabilities).
There's a Python module (pydot) that makes interacting with graphviz simple. Again, I can't say for sure it'll scale to your levels; however, it should be easy to find out, since installation of both is simple. A tiny example follows.
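A hedged example of the pydot side (node names and the output file are invented):

import pydot

graph = pydot.Dot("g", graph_type="graph")   # undirected graph
graph.add_edge(pydot.Edge("a", "b"))
graph.add_edge(pydot.Edge("b", "c"))
graph.write_png("graph.png")                 # needs the graphviz binaries installed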
hth.
Have you considered using ParaView or VisIt? These are two interactive plotting programs which are designed to deal with and plot (very!) large data sets. They both also have a Python scripting interface, so you can automate/control your visualizations from within the Python interpreter.
Have you tried Gephi? I believe it scales very well.

Best 3D library to model robotic motion [closed]

A short while ago I asked for suggestions on choosing a Python-compatible 3D graphics library for robotic motion modelling (using inverse kinematics in Python). After doing a bit of research and redefining my objectives, I hope I can ask once again for a bit of assistance.
At the time I thought Blender was the best option, but now I'm having doubts. One key objective I have is the ability to integrate the model into a custom GUI (wxPython). It seems like this might be rather difficult (and I'm unsure of the performance requirements).
I think I'm now leaning more towards OpenGL (PyOpenGL + wxglcanvas), but I'm still struggling to determine if it's the right tool for the job. I'm more of a CAD person, so I have trouble envisioning how to draw complex objects in the API and create motion. I read I could design the object in, say, Blender, then import it into OpenGL somehow, but I'm unsure of the process. And how difficult is manipulating the motion of objects? For example, if I create a joint between two links and I move one link, would the other link move dynamically according to the first, or would I need to program each link's movement independently?
Have I missed any obvious tools? I'm not looking for complete robotic modelling packages; I would like to start from scratch so I can incorporate it into my own program, and for learning more than anything. So far I've already looked into vPython, Pyglet, Panda3D, Ogre, and several professional CAD packages.
Thanks
There is a similar project going on that implements a robotics toolbox for MATLAB and Python. It has "rudimentary 3D graphics", but you can always interface it with Blender through a well-knit script; that will be less work than reinventing the wheel.
If movements can be pre-computed, you can use Blender, hand-craft animations, bake them into some animated file format (cal3d?), and just render them in your wxPython OpenGL window.
If you need to handle user input, you can use a physics engine... I hear Bullet has Python bindings: http://www.bulletphysics.org/Bullet/phpBB3/viewtopic.php?p=&f=9&t=4030 (probably still unstable).
Regarding your doubts on Blender/OpenGL: what do you mean by "complex objects"? How many "robots/whatever"? How many triangles per robot? How many articulations per robot? (I'll edit my answer depending on yours.)
Anyway, OpenGL by itself won't do anything other than just "display triangles"; everything else has to be done elsewhere.
EDIT
Sorry for the delay, I completely forgot.
So here is what I suggest:
Model your robot in Blender with polygons. You can easily go above 10k polygons, but try to keep the number of objects small (one object per moving part)
Rig it, i.e. create a skeleton for it. You don't need to animate it.
Export as Collada or X3D
Reimport it in your own OpenGL app
Draw your objects at the positions and orientations specified by the skeleton
Modify the angles between the bones just as you would with real stepper motors (see the numpy sketch after this list)
If step #5 was done right, the robot should follow the movements
Optionally add physics (for instance with Bullet). The API will be similar in concept to OpenGL, and you will be able to catch objects with your robotic arm.
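A small, hedged sketch of the idea behind step #6 in plain numpy, treating each joint angle like a stepper-motor setpoint (a planar chain; the link lengths and angles are made-up values):

import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def fk(angles, lengths):
    # Forward kinematics: accumulate each joint's rotation, then step
    # along the rotated link to get every joint position in the chain.
    frame = np.eye(3)
    pos = np.zeros(3)
    points = [pos.copy()]
    for theta, length in zip(angles, lengths):
        frame = frame @ rot_z(theta)
        pos = pos + frame @ np.array([length, 0.0, 0.0])
        points.append(pos.copy())
    return points

print(fk([np.pi / 4, -np.pi / 6], [1.0, 0.8]))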
Good luck!
