I'm confused about how the PyOpenGL camera works and how to implement it. Am I meant to rotate and move the whole world around the camera, or is there a different way?
I couldn't find anything that helped me, and I don't know how to translate C to Python.
I just need a way to transform the camera that helps me understand how it works.
To say it bluntly: there is no such thing as a "camera" in OpenGL (nor is there in DirectX, Vulkan, or any of the legacy 3D graphics APIs). The effect of a camera is understood as a set of parameters that contribute to the ultimate placement of geometry inside the viewport volume.
The sooner you understand that all current GPUs do is offer massively accelerated computational resources for setting the values of pixels in a 2D grid, where the regions of pixels changed are mere points, lines or triangles on a 2D plane, projected onto it from an arbitrarily dimensioned, abstract space, the better.
You're not even moving the world around the camera. Setting up transformations is really erecting the stage in which "the world" will appear in the first place. Any notion of a "camera" is an abstraction created by a higher-level framework, like a third-party 3D engine or your own creation.
So instead of thinking in terms of a camera, which constrains your thinking, you should think about it this way:
What kind of transformations do I have to chain up to give a tuple of numbers called a "position" an actual meaning, by letting that position turn up at a certain place on the visible screen?
You really ought to think that way, because that is what's actually happening.
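As a concrete illustration, here is a minimal sketch of such a transformation chain using legacy PyOpenGL calls. The "camera" is nothing more than the inverse of a rotation and a translation applied to the modelview matrix before any world geometry is drawn; the function name and parameters below are placeholders of my own.

```python
from OpenGL.GL import (GL_MODELVIEW, glLoadIdentity, glMatrixMode,
                       glRotatef, glTranslatef)

def apply_camera(cam_x, cam_y, cam_z, pitch, yaw):
    """Emulate a "camera" by loading the inverse of its transform
    into the modelview matrix before any world geometry is drawn."""
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    # Inverse rotation first: rotate the scene opposite to the camera...
    glRotatef(-pitch, 1.0, 0.0, 0.0)
    glRotatef(-yaw, 0.0, 1.0, 0.0)
    # ...then the inverse translation: shift the scene opposite to the camera.
    glTranslatef(-cam_x, -cam_y, -cam_z)
    # Everything drawn after this call ends up placed relative to the
    # chosen "camera" position and orientation inside the viewport volume.
```

With the programmable pipeline the idea is the same, except you build that matrix yourself (e.g. with numpy) and pass it to your shaders as a uniform.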
I have 4 cameras placed in the corners of a room and would like to estimate the rotation and translation of the cameras relative to one of those 4 cameras with the help of OpenCV. I was going to estimate R and t based on the essential matrix between cameras 1 and 2, cameras 1 and 3, and cameras 1 and 4. Since the essential matrix only depends on 2 views, I was wondering if there is a smarter way of taking advantage of having 4 views to determine R and t? Are there any good guides or tutorials for such a multi-view calibration available? A quick Google search didn't lead to success.
Thank you in advance!
I assume the cameras are already calibrated and that there is a portion of the field of view of the reference camera (the one you refer to as "one of those cameras") that is also visible in all the other cameras.
If this is the case, you only need to match some static points in the scene between each (reference, other) camera pair, then use solvePnP to get the rotations and translations. You can use a calibration chart (checkerboard or dots) to provide the set of static points.
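A minimal sketch of that idea, assuming the intrinsics are already known and that every camera sees the same checkerboard; the file names, pattern size, square size and intrinsics below are placeholders:

```python
import cv2
import numpy as np

# Placeholder intrinsics; in practice use the values from your calibration.
K_ref = K_i = np.array([[800.0, 0.0, 320.0],
                        [0.0, 800.0, 240.0],
                        [0.0, 0.0, 1.0]])
dist_ref = dist_i = np.zeros(5)

pattern_size = (9, 6)      # inner corners of the checkerboard (assumed)
square_size = 0.025        # board square size in metres (assumed)

# 3D corner coordinates in the board's own coordinate frame.
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size

def board_pose(image_path, K, dist):
    """Return (R, t) of the board expressed in this camera's frame."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        raise RuntimeError("board not found in %s" % image_path)
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    _, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec

# "cam1.png" is the reference camera, "cam2.png" one of the others (placeholders).
R_ref, t_ref = board_pose("cam1.png", K_ref, dist_ref)
R_i, t_i = board_pose("cam2.png", K_i, dist_i)

# Relative pose mapping reference-camera coordinates into camera i.
R_rel = R_i @ R_ref.T
t_rel = t_i - R_rel @ t_ref
```

If you want to squeeze more accuracy out of having four views, you can afterwards refine all the poses jointly with a bundle adjustment, but the pairwise solvePnP result is usually a good starting point.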
I have a set of images that I have to use for training a network. I want to simulate a lens flare effect and chromatic aberration on the images. I have tried to find a function for this in OpenCV, scikit-image and other Python image libraries, but no help there. How can I simulate these effects on my images? A rough idea or code would be useful. The images are in JPG format.
It depends on what kind of lens flare you are trying to achieve. One approach: create e.g. a hexagon mask and overlay multiple partially transparent instances of it between the start and end points of the flare axis. The hexagons should get at least slightly bigger "in the sun's direction" and be spaced more or less equally apart. The user should be able to click the start and end points of that axis on the picture and use e.g. the mouse to rotate the axis, zoom it in or out, and define the number of flare elements to be added; a rough sketch of the overlay part follows below.
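Here is a minimal sketch of that hexagon-overlay idea with OpenCV (the interactive start/end point selection is left out; the file name, colour and opacity are placeholder choices):

```python
import cv2
import numpy as np

def add_flare(img, start, end, n_elements=6):
    """Overlay partially transparent hexagons along the axis from `start`
    to `end` (pixel coordinates), growing towards the end point."""
    out = img.copy()
    start, end = np.asarray(start, float), np.asarray(end, float)
    for i in range(n_elements):
        t = (i + 1) / float(n_elements)
        centre = (1.0 - t) * start + t * end
        radius = 10 + 40 * t                        # bigger towards the "sun"
        angles = np.deg2rad(np.arange(0, 360, 60))  # six corners -> hexagon
        hexagon = np.stack([centre[0] + radius * np.cos(angles),
                            centre[1] + radius * np.sin(angles)], axis=1)
        overlay = out.copy()
        cv2.fillConvexPoly(overlay, hexagon.astype(np.int32), (200, 220, 255))
        out = cv2.addWeighted(overlay, 0.15, out, 0.85, 0)  # ~15% opacity
    return out

img = cv2.imread("input.jpg")                       # placeholder file name
cv2.imwrite("flared.jpg", add_flare(img, (100, 100), (500, 400)))
```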
For chromatic aberration, I would split the RGB components, apply slightly different scaling factors, and merge back. Depending on whether you want to simulate a flint or crown effect, the factors will be increasing or decreasing.
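A minimal sketch of that channel-scaling approach with OpenCV (the scale factors and file names are placeholders; swap r_scale and b_scale to flip between the two effects):

```python
import cv2
import numpy as np

def chromatic_aberration(img, r_scale=1.01, b_scale=0.99):
    """Scale the red and blue channels by slightly different factors and
    fit them back to the original size, leaving green untouched."""
    h, w = img.shape[:2]
    b, g, r = cv2.split(img)

    def rescale(channel, s):
        scaled = cv2.resize(channel, None, fx=s, fy=s,
                            interpolation=cv2.INTER_LINEAR)
        sh, sw = scaled.shape[:2]
        if s >= 1.0:
            # channel grew: crop the centre back to (h, w)
            y0, x0 = (sh - h) // 2, (sw - w) // 2
            return scaled[y0:y0 + h, x0:x0 + w]
        # channel shrank: pad it back to (h, w)
        out = np.zeros((h, w), dtype=channel.dtype)
        y0, x0 = (h - sh) // 2, (w - sw) // 2
        out[y0:y0 + sh, x0:x0 + sw] = scaled
        return out

    return cv2.merge((rescale(b, b_scale), g, rescale(r, r_scale)))

img = cv2.imread("input.jpg")                       # placeholder file name
cv2.imwrite("aberrated.jpg", chromatic_aberration(img))
```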
I'm trying to display an array of image thumbnails using Python and Qt4. My problem is that I don't want to calculate the number of columns for the grid, so that when the application is resized or my thumbnails get bigger, the number of columns changes automatically.
Actually I want to use QLabel, because the images are going to have file names and possibly buttons. Is there an easy way to do it?
Something like this:
Brendan Abel's answer is the right and elegant way to use the power of Qt. However, if you find the model-view architecture too heavy, I'd suggest you use the FlowLayout demonstrated here.
It's quite easy to implement and may suit your needs.
You should look into using a QGraphicsView. It's a good building block for truly custom widgets that don't really resemble any of the built-in widgets. It uses a model/view architecture and gives you pretty much unlimited flexibility in how and where each item is drawn, as opposed to relying on the more limited QLayout system of placement.
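As a rough illustration of that approach, here is a minimal PyQt4 sketch that re-flows pixmap items into however many columns fit the current viewport width; the class name, thumbnail size and glob pattern are placeholders of my own, and labels or buttons could be embedded later via QGraphicsProxyWidget:

```python
import glob
import sys
from PyQt4 import QtCore, QtGui

class ThumbnailView(QtGui.QGraphicsView):
    """Shows pixmap thumbnails and re-flows them whenever the view resizes."""

    def __init__(self, image_paths, thumb_size=128, parent=None):
        super(ThumbnailView, self).__init__(parent)
        self.setScene(QtGui.QGraphicsScene(self))
        self.thumb_size = thumb_size
        self.items = []
        for path in image_paths:
            pix = QtGui.QPixmap(path).scaled(
                thumb_size, thumb_size,
                QtCore.Qt.KeepAspectRatio, QtCore.Qt.SmoothTransformation)
            self.items.append(self.scene().addPixmap(pix))

    def resizeEvent(self, event):
        super(ThumbnailView, self).resizeEvent(event)
        # Work out how many columns fit, then place the items row by row.
        spacing = 8
        cols = max(1, self.viewport().width() // (self.thumb_size + spacing))
        for i, item in enumerate(self.items):
            row, col = divmod(i, cols)
            item.setPos(col * (self.thumb_size + spacing),
                        row * (self.thumb_size + spacing))
        self.scene().setSceneRect(self.scene().itemsBoundingRect())

if __name__ == '__main__':
    app = QtGui.QApplication(sys.argv)
    view = ThumbnailView(glob.glob('*.jpg'))   # placeholder image list
    view.show()
    sys.exit(app.exec_())
```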
So I am attempting to make a little audio player using Pygame. I wanted to add a little audio visualizer similar to the one in Windows Media Player. I was thinking of starting with an audio wave that scrolls across the screen, but I'm not sure where to start.
Right now I'm just using pygame.mixer to start, stop, and pause the music. I think I would have to use pygame.sndarray and get some samples but I don't know what to do from there. What can I do to turn those samples into a visual audio wave?
Check out the pygame.draw methods.
You can probably take the audio values and map them to one of the draw options - like draw.arc or draw.line. You will have to map the signal output to values that remain within the X and Y max and min of the viewport.
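A minimal sketch of that mapping, assuming a WAV file (the file name, window size and colours are placeholders): it grabs the samples via pygame.sndarray, downsamples them to one value per screen column, and draws a static waveform with pygame.draw.lines.

```python
import numpy as np
import pygame

pygame.init()                                   # also initialises the mixer
screen = pygame.display.set_mode((800, 200))

sound = pygame.mixer.Sound("song.wav")          # placeholder file name
samples = pygame.sndarray.array(sound)          # requires numpy
if samples.ndim > 1:                            # stereo: keep one channel
    samples = samples[:, 0]

# Downsample to one value per screen column and map amplitude to screen Y,
# keeping everything inside the viewport's min/max.
width, height = screen.get_size()
step = max(1, len(samples) // width)
samples = samples[::step][:width].astype(float)
peak = max(1.0, float(np.abs(samples).max()))
points = [(x, int(height / 2 - s / peak * (height / 2 - 10)))
          for x, s in enumerate(samples)]

sound.play()
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    screen.fill((0, 0, 0))
    pygame.draw.lines(screen, (0, 255, 0), False, points, 1)
    pygame.display.flip()

pygame.quit()
```

To get a scrolling view instead of a static one, you would redraw each frame from a sliding window over the sample array rather than from the whole file at once.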
Processing can do the same thing, but is a bit easier to implement if you are interested in learning the scripting language. It has methods specifically for doing the mapping for you and you can do some pretty extreme visuals without a lot of code.
I am looking for a library, example or similar that allows me to load a set of 2D projections of an object and then convert them into a 3D volume.
For example, I could have 6 pictures of a small toy and the program should allow me to view it as a 3D volume and eventually save it.
The object I need to convert is very similar to a cylinder (so the program doesn't have to 'understand' what type of object it is).
There are several things you could mean, none of which, I think, currently exists in free software (but I may be wrong about that), and they differ in how hard they are to implement:
First of all, "a 3D volume" is not a clear definition of what you want. There is not one way to store this information. A usual way (for computer games and animations) is to store it as a mesh with textures. Getting the textures is easy: you have the photographs. Creating the mesh can be really hard, depending on what exactly you want.
You say your object looks like a cylinder. If you want to just stitch your images together and paste them as a texture over a cylindrical mesh, that should be possible. If you know the angles at which the images are taken, the stitching will be even easier.
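If you take that simplest route, a minimal sketch of the cylindrical-mesh half of the job might look like this (pure Python, writing a Wavefront OBJ/MTL pair; the stitched texture file name and the dimensions are placeholders):

```python
import math

def write_textured_cylinder(name, radius=1.0, height=2.0, segments=64,
                            texture_name="stitched.jpg"):
    """Write an OBJ/MTL pair for an open cylinder whose side is UV-mapped
    so a panoramic (stitched) image wraps exactly once around it."""
    verts, uvs, faces = [], [], []
    for i in range(segments + 1):
        angle = 2.0 * math.pi * i / segments
        x, z = radius * math.cos(angle), radius * math.sin(angle)
        verts.append((x, 0.0, z))            # bottom ring vertex
        verts.append((x, height, z))         # top ring vertex
        u = i / float(segments)
        uvs.append((u, 0.0))
        uvs.append((u, 1.0))
    for i in range(segments):
        a, b = 2 * i + 1, 2 * i + 2          # OBJ indices are 1-based
        c, d = 2 * i + 3, 2 * i + 4
        faces.append((a, c, d))              # two triangles per quad
        faces.append((a, d, b))

    with open(name + ".mtl", "w") as mtl:
        mtl.write("newmtl side\nmap_Kd %s\n" % texture_name)
    with open(name + ".obj", "w") as obj:
        obj.write("mtllib %s.mtl\nusemtl side\n" % name)
        for v in verts:
            obj.write("v %f %f %f\n" % v)
        for vt in uvs:
            obj.write("vt %f %f\n" % vt)
        for f in faces:
            obj.write("f %d/%d %d/%d %d/%d\n"
                      % (f[0], f[0], f[1], f[1], f[2], f[2]))

write_textured_cylinder("toy_cylinder")
```

Any OBJ-capable viewer can then display the textured result, which connects to the viewing step mentioned below.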
However, the really cool thing that most people would want is to create any mesh, not just a cylinder, based on the stitching "errors" (which originate from the parallax effect, and therefore contain information about the depth of the pictures). I know Autodesk (the makers of AutoCAD) have a web-based tool for this (named 123-something), but they don't let you put it into your own program; you have to use their interface. So it's fine for getting a result, but not as a basis for a program of your own.
Once you have the mesh, you'll need a viewer (so it's not view first, save later; it's the other way around). You should be able to use any 3D drawing program; for example, Blender can view (and edit) many file types.