I'm trying to display an array of image thumbnails using Python and Qt4. My problem is that I don't want to calculate the number of columns for the grid myself; when the application is resized or my thumbnails get bigger, the number of columns should change automatically.
I actually want to use QLabel, because the images are going to have file names and possibly buttons. Is there an easy way to do it?
Brendan Abel's answer is the right and elegant way to use the power of Qt. However, if you find the model-view architecture too heavy, I'd suggest using the FlowLayout demonstrated here.
It's quite easy to implement and may suit your needs.
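For reference, here is a minimal PyQt4 port of that FlowLayout example, trimmed for brevity; treat it as a sketch rather than production code. It reflows its child widgets (thumbnail QLabels, say) into as many columns as the current width allows:
```python
from PyQt4 import QtCore, QtGui

class FlowLayout(QtGui.QLayout):
    """Lays items out left to right, wrapping to a new row when full."""

    def __init__(self, parent=None, margin=0, spacing=6):
        super(FlowLayout, self).__init__(parent)
        self.setMargin(margin)
        self.setSpacing(spacing)
        self._items = []

    def addItem(self, item):
        self._items.append(item)

    def count(self):
        return len(self._items)

    def itemAt(self, index):
        return self._items[index] if 0 <= index < len(self._items) else None

    def takeAt(self, index):
        return self._items.pop(index) if 0 <= index < len(self._items) else None

    def hasHeightForWidth(self):
        return True

    def heightForWidth(self, width):
        return self._doLayout(QtCore.QRect(0, 0, width, 0), test_only=True)

    def setGeometry(self, rect):
        super(FlowLayout, self).setGeometry(rect)
        self._doLayout(rect, test_only=False)

    def sizeHint(self):
        return self.minimumSize()

    def minimumSize(self):
        size = QtCore.QSize()
        for item in self._items:
            size = size.expandedTo(item.minimumSize())
        return size + QtCore.QSize(2 * self.margin(), 2 * self.margin())

    def _doLayout(self, rect, test_only):
        # Place each item; start a new row whenever the next one would
        # overflow the right edge of the available rectangle.
        x, y, line_height = rect.x(), rect.y(), 0
        for item in self._items:
            next_x = x + item.sizeHint().width() + self.spacing()
            if next_x - self.spacing() > rect.right() and line_height > 0:
                x = rect.x()
                y += line_height + self.spacing()
                next_x = x + item.sizeHint().width() + self.spacing()
                line_height = 0
            if not test_only:
                item.setGeometry(QtCore.QRect(QtCore.QPoint(x, y),
                                              item.sizeHint()))
            x = next_x
            line_height = max(line_height, item.sizeHint().height())
        return y + line_height - rect.y()

# Usage sketch ('a.png', 'b.png' are placeholder file names):
# app = QtGui.QApplication([])
# panel = QtGui.QWidget()
# layout = FlowLayout(panel)
# for path in ['a.png', 'b.png']:
#     label = QtGui.QLabel()
#     label.setPixmap(QtGui.QPixmap(path).scaledToWidth(128))
#     layout.addWidget(label)
# panel.show()
# app.exec_()
```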
You should look into using a QGraphicsView. It's a good building block for truly custom widgets that don't really resemble any of the built-in ones. It uses a model/view architecture and gives you pretty much unlimited flexibility in how and where each item is drawn, as opposed to relying on the more limited QLayout system of placement.
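Something along these lines, as a rough sketch (the class name, file paths, and sizes are illustrative): it lays pixmap items out in a grid and recomputes the column count on every resize.
```python
from PyQt4 import QtCore, QtGui

class ThumbnailView(QtGui.QGraphicsView):
    """Lays thumbnails out in a grid whose column count follows the width."""

    def __init__(self, image_paths, thumb_size=128, spacing=10, parent=None):
        super(ThumbnailView, self).__init__(parent)
        self.setScene(QtGui.QGraphicsScene(self))
        self._cell = thumb_size + spacing
        self._items = []
        for path in image_paths:
            pixmap = QtGui.QPixmap(path).scaled(
                thumb_size, thumb_size, QtCore.Qt.KeepAspectRatio)
            self._items.append(self.scene().addPixmap(pixmap))

    def resizeEvent(self, event):
        super(ThumbnailView, self).resizeEvent(event)
        self._relayout()

    def _relayout(self):
        # Recompute the column count from the current viewport width,
        # then drop each pixmap item into its row/column slot.
        columns = max(1, self.viewport().width() // self._cell)
        for i, item in enumerate(self._items):
            row, col = divmod(i, columns)
            item.setPos(col * self._cell, row * self._cell)
        self.scene().setSceneRect(self.scene().itemsBoundingRect())

# view = ThumbnailView(['a.png', 'b.png'])  # placeholder file names
# view.show()
```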
I'm confused about how the PyOpenGL camera works and how to implement it. Am I meant to rotate and move the whole world around the camera, or is there a different way?
I couldn't find anything that could help me, and I don't know how to translate the C examples to Python.
I just need a way to transform the camera that can help me understand how it works.
To say it bluntly: there is no such thing as a "camera" in OpenGL (nor is there one in DirectX, Vulkan, or any of the legacy 3D graphics APIs). The effect of a camera is achieved through parameters that contribute to the final placement of geometry inside the viewport volume.
The sooner you understand that all current GPUs do is offer massively accelerated computational resources to set the values of pixels in a 2D grid, where the regions of pixels changed are mere points, lines or triangles on a 2D plane onto which they are projected from an arbitrarily dimensioned, abstract space, the better.
You're not even moving the world around the camera. Setting up transformations is actually erecting the stage on which "the world" will appear in the first place. Any notion of a "camera" is an abstraction created by a higher-level framework, like a third-party 3D engine or your own creation.
So instead of thinking in terms of a camera, which constrains your thinking, you should think about it this way:
What kind of transformations do I have to chain up, to give a tuple of numbers that are called "position" an actual meaning, by letting this position turn up at a certain place on the visible screen?
You really ought to think that way, because that is what's actually happening.
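To make that concrete, here is a minimal sketch (using numpy; the function name look_at is just illustrative) of the transform most frameworks hide behind a "camera": a view matrix built from an eye position, a target, and an up vector, chained with projection and model matrices before being applied to every vertex position.
```python
import numpy as np

def look_at(eye, target, up):
    """Build a 4x4 view matrix from an eye position, a target point,
    and an up vector (the same matrix gluLookAt used to compute).

    It is the inverse of the "camera's" placement: instead of moving
    a camera right, every vertex gets moved left.
    """
    eye, target, up = (np.asarray(v, dtype=np.float32)
                       for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)            # forward
    s = np.cross(f, up)
    s /= np.linalg.norm(s)            # right
    u = np.cross(s, f)                # corrected up
    view = np.identity(4, dtype=np.float32)
    view[0, :3] = s
    view[1, :3] = u
    view[2, :3] = -f
    view[:3, 3] = -view[:3, :3].dot(eye)  # rotate first, then translate
    return view

# The full chain that gives a "position" tuple meaning on screen:
# clip = projection.dot(view).dot(model).dot(local_position)
```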
I'm currently working on a personal project: identifying products in a scanned image taken from a store catalog.
As you can see in the image, there are no separating lines between products, so using Hough lines to locate the products won't really solve the problem!
Tesseract is really amazing for extracting the image's content; the only problem I'm facing is finding the products automatically. I don't want to crop the image manually; I want to detect the products, crop them together with their text description and price, and then extract the content using OCR.
I have tried many image processing techniques but still nothing (I'm using Python and OpenCV).
Thanks in advance :)
The problem you have is usually called background removal, or alternatively foreground extraction. In this example, it might actually be relatively easy, as the background is mostly in shades of the same color - my recommendation would be to look at the GrabCut algorithm which is described here: https://docs.opencv.org/3.4.3/d8/d83/tutorial_py_grabcut.html
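A minimal sketch of that tutorial's approach; the file name and rectangle below are placeholders (in practice you would loop over candidate regions rather than hand-pick one):
```python
import cv2
import numpy as np

# 'catalog.jpg' and the rectangle are placeholders: pick a region you
# believe contains one product (foreground) plus some background.
img = cv2.imread('catalog.jpg')
mask = np.zeros(img.shape[:2], np.uint8)

# Scratch arrays GrabCut uses internally for its background/foreground
# Gaussian mixture models.
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

rect = (50, 50, 300, 400)  # (x, y, w, h)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5,
            cv2.GC_INIT_WITH_RECT)

# Keep pixels labelled definite or probable foreground, zero the rest.
fg_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                   1, 0).astype('uint8')
product = img * fg_mask[:, :, np.newaxis]
cv2.imwrite('product.png', product)
```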
What is the best way to compare two images from the same domain but with different features in Python: histograms or image quality functions?
I have two images of different scenes; the contents inside the images are different, but both images were taken during the morning.
I want to compare how closely these two images are related to each other. My important metric is to be able to say that these two images were taken during the morning, for example, even if their contents differ.
Any idea or way how to do this?
There is no easy answer to your question. It depends on what makes you consider images similar or different, and that is a subjective measure that depends entirely on what you want to do with this information.
Anyway, for this kind of problem, OpenCV is your friend. Here are some ideas:
use histograms: cv2.calcHist https://docs.opencv.org/3.1.0/d1/db7/tutorial_py_histogram_begins.html
with histograms you can know how bluish, greenish or reddish an image is. You can compare whether 2 images fall in the same range (bin) of a specific color. This is something very common when you want to detect skin color.
if you have a specific object that appears in different images, use SIFT or SURF.
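As a sketch of the histogram idea (file names are placeholders), cv2.calcHist and cv2.compareHist make this a few lines:
```python
import cv2

def color_histogram(path, bins=32):
    """Normalized 2D hue/saturation histogram of an image file."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins],
                        [0, 180, 0, 256])
    return cv2.normalize(hist, hist)

# 'morning1.jpg' and 'morning2.jpg' are placeholder file names.
h1 = color_histogram('morning1.jpg')
h2 = color_histogram('morning2.jpg')

# Correlation: 1.0 for identical distributions, lower for less similar.
score = cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)
print('similarity: %.3f' % score)
```
For "taken in the morning", the overall brightness and hue distribution matter more than the contents, which is exactly what a histogram comparison captures.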
So I am attempting to make a little audio player using Pygame. I wanted to add a little audio visualizer similar to the one in Windows Media Player. I was thinking of starting with an audio wave that scrolls across the screen, but I'm not sure where to start.
Right now I'm just using pygame.mixer to start, stop, and pause the music. I think I would have to use pygame.sndarray and get some samples but I don't know what to do from there. What can I do to turn those samples into a visual audio wave?
Check out the pygame.draw methods.
You can probably take the audio values and map them to one of the draw options - like draw.arc or draw.line. You will have to map the signal output to values that remain within the X and Y max and min of the viewport.
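As a rough sketch of that mapping (the file name is a placeholder, and the 44.1 kHz 16-bit stereo format is an assumption; query pygame.mixer.get_init() for the real values), something like this draws one screen-width window of samples as the track plays:
```python
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 240))
pygame.mixer.music.load('song.ogg')           # playback
sound = pygame.mixer.Sound('song.ogg')        # same file, for its samples
samples = pygame.sndarray.array(sound)[:, 0]  # left channel of stereo data

pygame.mixer.music.play()
clock = pygame.time.Clock()
window = 640  # one sample per horizontal pixel

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    # Use the playback position to pick which slice of samples to show.
    pos_ms = pygame.mixer.music.get_pos()      # milliseconds since play()
    start = int(pos_ms / 1000.0 * 44100)       # assumes a 44.1 kHz mixer
    chunk = samples[start:start + window]
    screen.fill((0, 0, 0))
    points = [
        # Map 16-bit values (-32768..32767) into the 240px-high viewport.
        (x, 120 + int(s / 32768.0 * 120))
        for x, s in enumerate(chunk)
    ]
    if len(points) > 1:
        pygame.draw.lines(screen, (0, 255, 0), False, points)
    pygame.display.flip()
    clock.tick(30)
pygame.quit()
```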
Processing can do the same thing, but is a bit easier to implement if you are interested in learning the scripting language. It has methods specifically for doing the mapping for you and you can do some pretty extreme visuals without a lot of code.
I'm trying to check the width of an object in another scene. The object in the other scene will be imported as a reference, but I need to know the width/height/depth (x/y/z bounding box) of the object in order to match a number of them into my scene according to parameters set by a script of mine.
The only way I've figured out so far is to reference the object into the scene, check the bounding box with the xform command, then remove the reference and proceed as normal. That solution seems both a bit slow (for large objects) and a bit awkward.
There's no way to interact with a Maya scene without it already being loaded in Maya, so I think your method is correct.
What do you mean by "match a number of them into my scene"? Do you mean you want to make multiple references based on the size, i.e. fill up a given volume, using the bounding box to determine how many will be needed? It seems that could be done just as easily after making one reference.
There's no other way to check than opening the file.
You could run an offline batch process to collect all of the information once and save it to a database or a simple file such as a CSV for faster access, if speed is really an issue.
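A hedged sketch of that batch idea, run through mayapy with maya.standalone (the scene paths and output location are placeholders):
```python
# Run through mayapy so the scenes load without the UI; the paths below
# are placeholders for illustration.
import csv

import maya.standalone
maya.standalone.initialize(name='python')
import maya.cmds as cmds

scene_files = ['/assets/chair.ma', '/assets/table.ma']
default_cams = {'persp', 'top', 'front', 'side'}

with open('/tmp/bboxes.csv', 'wb') as f:  # 'wb' for Python 2's csv module
    writer = csv.writer(f)
    writer.writerow(['file', 'node',
                     'xmin', 'ymin', 'zmin', 'xmax', 'ymax', 'zmax'])
    for path in scene_files:
        cmds.file(path, open=True, force=True)
        for node in cmds.ls(assemblies=True):
            if node in default_cams:  # skip Maya's default cameras
                continue
            # xform -q -bb -ws gives (xmin, ymin, zmin, xmax, ymax, zmax)
            bbox = cmds.xform(node, query=True,
                              boundingBox=True, worldSpace=True)
            writer.writerow([path, node] + list(bbox))
```
Your in-scene script then reads the CSV instead of referencing each file just to measure it.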