How to display small images in an HMI? PyQt Python

I am making my first piece of software with an HMI.
I'm creating a tool that performs contour detection on an image and produces a small image for each detected contour. So far I have managed to do all of that, and the small images are already generated and saved on my PC. Now I would like to display them all at once in the tool part of my HMI, but I don't really know where to start.
I'm not asking for complete working code, just for clues or tips that could help me figure out how to display them. I have heard about QIcon; could that do it, for example?
To help you understand my idea, here is a screenshot of my HMI. The red area shows where I would like to display the small images from my computer as a mosaic, with a scroll bar to reach the next images in case they don't all fit in the space.
Thank you for any ideas you can give me!
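One common approach (not the only one) is to put QLabel widgets holding scaled QPixmaps into a QGridLayout, and wrap that grid in a QScrollArea so the extra images can be reached with the scroll bar. Below is a minimal sketch assuming PyQt5 and a hypothetical contours/*.png folder for the saved small images; in a real HMI the scroll area would be embedded in the existing layout rather than shown as its own window.

```python
import sys
import glob
from PyQt5.QtWidgets import (QApplication, QWidget, QGridLayout,
                             QLabel, QScrollArea)
from PyQt5.QtGui import QPixmap
from PyQt5.QtCore import Qt

def build_mosaic(image_paths, columns=4, thumb_size=100):
    """Return a scrollable widget showing the images as a grid of thumbnails."""
    container = QWidget()
    grid = QGridLayout(container)
    for index, path in enumerate(image_paths):
        label = QLabel()
        pixmap = QPixmap(path).scaled(thumb_size, thumb_size,
                                      Qt.KeepAspectRatio,
                                      Qt.SmoothTransformation)
        label.setPixmap(pixmap)
        grid.addWidget(label, index // columns, index % columns)
    scroll = QScrollArea()
    scroll.setWidgetResizable(True)
    scroll.setWidget(container)
    return scroll

if __name__ == "__main__":
    app = QApplication(sys.argv)
    # "contours/*.png" is a placeholder pattern for wherever the small images are saved
    viewer = build_mosaic(sorted(glob.glob("contours/*.png")))
    viewer.resize(450, 300)
    viewer.show()
    sys.exit(app.exec_())
```

QIcon would also work (for example inside a QListWidget in icon mode), but the QLabel/QPixmap grid keeps full control over thumbnail size and layout.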

Related

Is there a way to draw in the "background" using turtle?

Is there a way to have an image created/drawn entirely without the window that usually pops up when starting a turtle script? The reason for this question is that while doing more research into another problem I posted here:
How to properly interact with turtle canvas/screen sizing?
I found that maximizing the window actually altered what was captured when using .getcanvas() to save the image.
This wouldn't be a problem if I weren't attempting to create large images, certainly larger than my monitors (around 15000 x 15000 pixels).
Thus I am wondering if there is a way to have the entire drawing process done in the background, without a window popping up at all. That way (I would hope, at least) my images wouldn't become distorted or incorrectly sized by buggy window interactions. As an example, when I try to create an image this big, even with turtle.tracer(False) set, the window still flashes briefly (the images are large and take time to complete). While it is 'open' I cannot switch to it; it does not appear on my screen, only on the task bar, and hovering over its task bar entry to 'preview' it, as with other applications, shows nothing. The image is still created and saved, but its dimensions are entirely wrong based on the code I used.
For a minimal reproducible example, please see the hyperlink to my related question. The code and resulting image in that post are directly related to this question, but as the questions are different in nature I decided to create this separate post.
Any feedback would be greatly appreciated, as I cannot find anything in the documentation on how this might be done, if it is possible at all. If anyone knows of good resources or contacts regarding turtle, that information would be welcome as well.
I'm not sure if this will help too much, but if you set the turtle's speed to 0 there will be no drawing animation and the turtle will draw the picture instantly.
The code would look something like: turtle.speed(0)
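As a rough sketch of how that combines with tracer(0) and getcanvas(), which the question already mentions: speed(0) removes the per-move animation and tracer(0) suppresses intermediate screen refreshes, though standard turtle will still open a window, so this only reduces drawing time rather than hiding the window entirely.

```python
import turtle

screen = turtle.Screen()
screen.tracer(0)          # disable screen refreshes while drawing
pen = turtle.Turtle()
pen.speed(0)              # no per-move animation
pen.hideturtle()

for _ in range(4):        # draw a simple square as a stand-in for the real drawing
    pen.forward(200)
    pen.left(90)

screen.update()           # force one final refresh so the canvas is complete
# save the canvas as PostScript, as in the original question
screen.getcanvas().postscript(file="drawing.eps")
```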

Detect object in an image using openCV python on a raspberry pi

I have a small project that I am tinkering with. I have a small box and I have attached my camera on top of it. I want to get a notification if anything is added to or removed from it.
My original logic was to constantly take images and compare them to see the difference, but that process is not good: even comparing images of the same scene gives out a difference, and I do not know why.
Can anyone suggest another way to achieve this?
Well, I can suggest a way of doing this. Basically, you can use some kind of object detection coupled with a machine learning algorithm. The way this might work is that you first train your program to recognize the closed box: take, say, 10 pictures of the closed box (just an example) and train on those, so the program can detect when the box is closed. When the box is not closed (i.e. open, missing, or something else), you can code your program to fire off a signal or do whatever it is you are trying to do. So the first obvious step is to write code for object detection. There are numerous ways of doing that alone, such as Haar classification or support vector machines. Once you have trained your program to look for the closed box, you can run it to predict what's happening in every frame of the camera feed. Hope this answered your question! Cheers!
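As for why comparing two captures of the same scene still shows a difference: consecutive camera frames never match pixel-for-pixel because of sensor noise and small lighting changes, so a raw comparison is almost never zero. A common workaround, shown here as a minimal sketch assuming OpenCV is installed and the camera is exposed as device 0, is to blur both frames and only react when the thresholded difference is large:

```python
import cv2

cap = cv2.VideoCapture(0)          # assumption: the Pi camera is exposed as device 0

def preprocess(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.GaussianBlur(gray, (21, 21), 0)   # blur to suppress sensor noise

ok, reference = cap.read()
reference = preprocess(reference)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    current = preprocess(frame)
    diff = cv2.absdiff(reference, current)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    changed_pixels = cv2.countNonZero(mask)
    if changed_pixels > 5000:      # arbitrary threshold: "something was added or removed"
        print("Change detected:", changed_pixels, "pixels differ")
    cv2.imshow("difference mask", mask)
    if cv2.waitKey(30) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```

The blur, threshold, and pixel-count values above are placeholders to tune for the actual camera and lighting; the trained object-detection approach described in the answer is the more robust option when lighting varies a lot.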

(Python) MainWindow background image with two overlapping images

I'm trying to give my main window a background image. The problem I'm running into is that the background is composed of 2 separate images: the top image is placed in the center, above the bottom image. I can't find any references on how to accomplish this. I'm thinking I might be able to use these two to do it, but I'm not sure if I'm heading in the right direction or not.
QGraphicsScene.BackgroundLayer
QGraphicsScene.ForegroundLayer
I'm using python3 with pyqt5. Any help pointing me in the right direction would be greatly appreciated. I wasn't able to find much of anything on this so far.
Thanks in advance.
Edit: in case there is confusion, I have to use 2 images because the background is generated from 2 pictures that are scraped from the web at run time. Maybe someone knows of a way to dynamically merge the 2 images at specific x,y coordinates with a library, and then just use the new image as the background?
I figured it out. The easiest way to accomplish what I'm trying to do is to make 2 labels: one that covers the entire window, and another that covers the area where the second picture is supposed to go. Then set a QPixmap on each label to put an image into each of them, figure out the x,y offset required to line up the inner image, adjust the label placement, and it's done.
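A minimal sketch of that two-label idea, assuming PyQt5 and two hypothetical files background.png and overlay.png; the geometry values are placeholders for the offsets worked out for the real images:

```python
import sys
from PyQt5.QtWidgets import QApplication, QMainWindow, QLabel
from PyQt5.QtGui import QPixmap

class Window(QMainWindow):
    def __init__(self):
        super().__init__()
        self.resize(800, 600)

        # bottom label: fills the whole window with the base background image
        background = QLabel(self)
        background.setPixmap(QPixmap("background.png").scaled(self.size()))
        background.setGeometry(0, 0, self.width(), self.height())

        # top label: positioned over the background at the chosen x,y offset
        overlay = QLabel(self)
        overlay_pixmap = QPixmap("overlay.png")
        x = (self.width() - overlay_pixmap.width()) // 2   # centered horizontally
        overlay.setPixmap(overlay_pixmap)
        overlay.setGeometry(x, 50, overlay_pixmap.width(),
                            overlay_pixmap.height())

if __name__ == "__main__":
    app = QApplication(sys.argv)
    win = Window()
    win.show()
    sys.exit(app.exec_())
```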

Drawing graphics for LCDs

I'm not sure if this is the correct place to ask this, but as it's more programming-related than electronics, I am posting it here.
I have recently purchased a small LCD to experiment with. It's just one colour, and I'm using Python to control it over serial.
My question concerns the actual drawing. This whole area is completely new to me, so I don't know if I am thinking about it the right way / going down the right path.
I want to be able to draw things on the LCD such as progress bars, animations (volume meters etc.) and other simple, non-text-based things. Just anything, really.
The way I imagine doing this is to use Python to draw a complete image of what I want on the LCD, using PIL / Pillow for example, and to constantly redraw and resend it to the LCD.
So in the case of the progress bar, everything that's static stays the same, but the progress bar rectangle, for example, would have its width altered on each redraw.
I don't know if this is the correct way, if there are better ways, or even if there are specific tools / modules for this kind of thing.
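That redraw-and-resend approach is a common one for small monochrome displays. Here is a minimal sketch, assuming Pillow and a hypothetical 128x64 one-bit display; the actual byte ordering and serial protocol depend entirely on the specific LCD controller and are not covered here.

```python
from PIL import Image, ImageDraw

# assumed display resolution; replace with the real LCD's width and height
WIDTH, HEIGHT = 128, 64

def render_progress_frame(progress):
    """Render one full 1-bit frame with a progress bar filled to `progress` (0.0-1.0)."""
    frame = Image.new("1", (WIDTH, HEIGHT), 0)           # 1-bit image, black background
    draw = ImageDraw.Draw(frame)
    # static outline of the bar, identical on every redraw
    draw.rectangle([10, 25, WIDTH - 10, 40], outline=1)
    # filled part whose width changes with progress
    fill_width = int((WIDTH - 22) * progress)
    if fill_width > 0:
        draw.rectangle([11, 26, 11 + fill_width, 39], fill=1)
    return frame

frame = render_progress_frame(0.6)
# frame.tobytes() gives the raw packed pixel data; how those bytes must be
# arranged before writing them to the serial port depends on the LCD controller
payload = frame.tobytes()
```

Redrawing the whole frame like this is usually fast enough for a small display, and it keeps the drawing code simple because every frame starts from a blank image.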

Using graphicsmagick, what is a good way to find the coordinates of a small image inside a bigger image?

Question:
Using graphicsmagick, what is a good way to find the coordinates of a small image inside a bigger image?
Explanation:
To explain further, I have a large screenshot that I am working with, and I would like to find the pixel coordinates of a known icon that is expected to be found somewhere within the screenshot.
Also, if this is not a good library for this purpose, I would love to hear suggestions for alternatives, preferably ones compatible with Python.
Thanks so much!
I use "gm display" to do that.
gm display &
Click on the image. Select Transform, then Crop. Put the cursor at the
top left of the small image. Read the coordinates from the small
information window. Select "dismiss"
Note that this is a manual method, which is OK if you are really "working
with the image" on screen. If you are looking for a batch method, it'll
be a little more complex.
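Since the question also asks about Python-compatible alternatives: the usual automated technique for this is template matching, which OpenCV (not GraphicsMagick) provides directly via matchTemplate. A minimal sketch, assuming the screenshot and icon are ordinary image files and the icon appears at the same scale in the screenshot:

```python
import cv2

# hypothetical filenames for the large screenshot and the known icon
screenshot = cv2.imread("screenshot.png")
icon = cv2.imread("icon.png")

# slide the icon over the screenshot and score the match at every position
result = cv2.matchTemplate(screenshot, icon, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

x, y = max_loc                       # top-left corner of the best match
h, w = icon.shape[:2]
print(f"best match at ({x}, {y}) with score {max_val:.3f}")
print(f"icon region: ({x}, {y}) to ({x + w}, {y + h})")
```

A score close to 1.0 means a confident match; if the icon may appear at different sizes, the search would need to be repeated over several scaled versions of the template.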
